00:00:00.001 Started by upstream project "autotest-per-patch" build number 132714 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:06.130 The recommended git tool is: git 00:00:06.130 using credential 00000000-0000-0000-0000-000000000002 00:00:06.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:06.145 Fetching changes from the remote Git repository 00:00:06.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:06.161 Using shallow fetch with depth 1 00:00:06.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:06.161 > git --version # timeout=10 00:00:06.173 > git --version # 'git version 2.39.2' 00:00:06.173 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:06.187 Setting http proxy: proxy-dmz.intel.com:911 00:00:06.187 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:10.744 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:10.756 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.766 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:10.766 > git config core.sparsecheckout # timeout=10 00:00:10.777 > git read-tree -mu HEAD # timeout=10 00:00:10.793 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:10.814 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:10.814 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:10.917 [Pipeline] Start of Pipeline 00:00:10.932 [Pipeline] library 00:00:10.933 Loading library shm_lib@master 00:00:10.934 Library shm_lib@master is cached. Copying from home. 00:00:10.948 [Pipeline] node 00:01:04.138 Still waiting to schedule task 00:01:04.139 Waiting for next available executor on ‘vagrant-vm-host’ 00:18:06.254 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:18:06.256 [Pipeline] { 00:18:06.271 [Pipeline] catchError 00:18:06.274 [Pipeline] { 00:18:06.289 [Pipeline] wrap 00:18:06.299 [Pipeline] { 00:18:06.310 [Pipeline] stage 00:18:06.312 [Pipeline] { (Prologue) 00:18:06.334 [Pipeline] echo 00:18:06.336 Node: VM-host-SM9 00:18:06.345 [Pipeline] cleanWs 00:18:06.357 [WS-CLEANUP] Deleting project workspace... 00:18:06.357 [WS-CLEANUP] Deferred wipeout is used... 
00:18:06.363 [WS-CLEANUP] done 00:18:06.607 [Pipeline] setCustomBuildProperty 00:18:06.707 [Pipeline] httpRequest 00:18:07.027 [Pipeline] echo 00:18:07.029 Sorcerer 10.211.164.20 is alive 00:18:07.039 [Pipeline] retry 00:18:07.040 [Pipeline] { 00:18:07.056 [Pipeline] httpRequest 00:18:07.060 HttpMethod: GET 00:18:07.061 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:18:07.061 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:18:07.062 Response Code: HTTP/1.1 200 OK 00:18:07.062 Success: Status code 200 is in the accepted range: 200,404 00:18:07.062 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:18:07.208 [Pipeline] } 00:18:07.226 [Pipeline] // retry 00:18:07.236 [Pipeline] sh 00:18:07.514 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:18:07.530 [Pipeline] httpRequest 00:18:07.848 [Pipeline] echo 00:18:07.850 Sorcerer 10.211.164.20 is alive 00:18:07.860 [Pipeline] retry 00:18:07.862 [Pipeline] { 00:18:07.879 [Pipeline] httpRequest 00:18:07.884 HttpMethod: GET 00:18:07.885 URL: http://10.211.164.20/packages/spdk_f501a7223ad3a770fdf849dea95166386328da80.tar.gz 00:18:07.885 Sending request to url: http://10.211.164.20/packages/spdk_f501a7223ad3a770fdf849dea95166386328da80.tar.gz 00:18:07.886 Response Code: HTTP/1.1 200 OK 00:18:07.886 Success: Status code 200 is in the accepted range: 200,404 00:18:07.886 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f501a7223ad3a770fdf849dea95166386328da80.tar.gz 00:18:10.160 [Pipeline] } 00:18:10.177 [Pipeline] // retry 00:18:10.183 [Pipeline] sh 00:18:10.461 + tar --no-same-owner -xf spdk_f501a7223ad3a770fdf849dea95166386328da80.tar.gz 00:18:13.754 [Pipeline] sh 00:18:14.038 + git -C spdk log --oneline -n5 00:18:14.038 f501a7223 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata) 00:18:14.038 8ffb12d0f lib/reduce: Support storing metadata on backing dev. 
(1 of 5, struct define and init process) 00:18:14.038 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:18:14.038 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:18:14.038 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:18:14.060 [Pipeline] writeFile 00:18:14.077 [Pipeline] sh 00:18:14.377 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:18:14.387 [Pipeline] sh 00:18:14.711 + cat autorun-spdk.conf 00:18:14.711 SPDK_RUN_FUNCTIONAL_TEST=1 00:18:14.711 SPDK_TEST_NVME=1 00:18:14.711 SPDK_TEST_FTL=1 00:18:14.711 SPDK_TEST_ISAL=1 00:18:14.711 SPDK_RUN_ASAN=1 00:18:14.711 SPDK_RUN_UBSAN=1 00:18:14.711 SPDK_TEST_XNVME=1 00:18:14.711 SPDK_TEST_NVME_FDP=1 00:18:14.711 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:18:14.733 RUN_NIGHTLY=0 00:18:14.735 [Pipeline] } 00:18:14.746 [Pipeline] // stage 00:18:14.761 [Pipeline] stage 00:18:14.763 [Pipeline] { (Run VM) 00:18:14.775 [Pipeline] sh 00:18:15.055 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:18:15.055 + echo 'Start stage prepare_nvme.sh' 00:18:15.055 Start stage prepare_nvme.sh 00:18:15.055 + [[ -n 4 ]] 00:18:15.055 + disk_prefix=ex4 00:18:15.055 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:18:15.055 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:18:15.055 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:18:15.055 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:18:15.055 ++ SPDK_TEST_NVME=1 00:18:15.055 ++ SPDK_TEST_FTL=1 00:18:15.055 ++ SPDK_TEST_ISAL=1 00:18:15.055 ++ SPDK_RUN_ASAN=1 00:18:15.055 ++ SPDK_RUN_UBSAN=1 00:18:15.055 ++ SPDK_TEST_XNVME=1 00:18:15.055 ++ SPDK_TEST_NVME_FDP=1 00:18:15.055 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:18:15.055 ++ RUN_NIGHTLY=0 00:18:15.055 + cd /var/jenkins/workspace/nvme-vg-autotest 00:18:15.055 + nvme_files=() 00:18:15.055 + declare -A nvme_files 00:18:15.055 + backend_dir=/var/lib/libvirt/images/backends 00:18:15.055 + nvme_files['nvme.img']=5G 00:18:15.055 + nvme_files['nvme-cmb.img']=5G 00:18:15.055 + nvme_files['nvme-multi0.img']=4G 00:18:15.055 + nvme_files['nvme-multi1.img']=4G 00:18:15.055 + nvme_files['nvme-multi2.img']=4G 00:18:15.055 + nvme_files['nvme-openstack.img']=8G 00:18:15.055 + nvme_files['nvme-zns.img']=5G 00:18:15.055 + (( SPDK_TEST_NVME_PMR == 1 )) 00:18:15.055 + (( SPDK_TEST_FTL == 1 )) 00:18:15.055 + nvme_files["nvme-ftl.img"]=6G 00:18:15.055 + (( SPDK_TEST_NVME_FDP == 1 )) 00:18:15.055 + nvme_files["nvme-fdp.img"]=1G 00:18:15.055 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:18:15.055 + for nvme in "${!nvme_files[@]}" 00:18:15.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:18:15.055 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:18:15.055 + for nvme in "${!nvme_files[@]}" 00:18:15.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:18:15.055 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:18:15.055 + for nvme in "${!nvme_files[@]}" 00:18:15.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:18:15.990 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:18:15.990 + for nvme in "${!nvme_files[@]}" 00:18:15.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:18:15.990 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:18:15.990 + for nvme in "${!nvme_files[@]}" 00:18:15.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:18:15.990 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:18:15.990 + for nvme in "${!nvme_files[@]}" 00:18:15.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:18:15.990 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:18:15.990 + for nvme in "${!nvme_files[@]}" 00:18:15.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:18:15.990 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:18:15.990 + for nvme in "${!nvme_files[@]}" 00:18:15.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:18:15.990 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:18:15.990 + for nvme in "${!nvme_files[@]}" 00:18:15.990 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:18:16.556 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:18:16.556 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:18:16.556 + echo 'End stage prepare_nvme.sh' 00:18:16.556 End stage prepare_nvme.sh 00:18:16.569 [Pipeline] sh 00:18:16.850 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:18:16.850 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:18:16.850 00:18:16.850 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:18:16.850 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:18:16.850 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:18:16.850 HELP=0 00:18:16.850 DRY_RUN=0 00:18:16.850 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:18:16.850 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:18:16.850 NVME_AUTO_CREATE=0 00:18:16.850 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:18:16.850 NVME_CMB=,,,, 00:18:16.850 NVME_PMR=,,,, 00:18:16.850 NVME_ZNS=,,,, 00:18:16.850 NVME_MS=true,,,, 00:18:16.850 NVME_FDP=,,,on, 00:18:16.850 SPDK_VAGRANT_DISTRO=fedora39 00:18:16.850 SPDK_VAGRANT_VMCPU=10 00:18:16.850 SPDK_VAGRANT_VMRAM=12288 00:18:16.850 SPDK_VAGRANT_PROVIDER=libvirt 00:18:16.850 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:18:16.850 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:18:16.850 SPDK_OPENSTACK_NETWORK=0 00:18:16.850 VAGRANT_PACKAGE_BOX=0 00:18:16.850 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:18:16.850 FORCE_DISTRO=true 00:18:16.850 VAGRANT_BOX_VERSION= 00:18:16.850 EXTRA_VAGRANTFILES= 00:18:16.850 NIC_MODEL=e1000 00:18:16.850 00:18:16.850 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:18:16.850 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:18:21.035 Bringing machine 'default' up with 'libvirt' provider... 00:18:21.035 ==> default: Creating image (snapshot of base box volume). 00:18:21.293 ==> default: Creating domain with the following settings... 
00:18:21.293 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733467493_0d546984807fc703a0fa 00:18:21.293 ==> default: -- Domain type: kvm 00:18:21.293 ==> default: -- Cpus: 10 00:18:21.293 ==> default: -- Feature: acpi 00:18:21.293 ==> default: -- Feature: apic 00:18:21.293 ==> default: -- Feature: pae 00:18:21.293 ==> default: -- Memory: 12288M 00:18:21.293 ==> default: -- Memory Backing: hugepages: 00:18:21.293 ==> default: -- Management MAC: 00:18:21.293 ==> default: -- Loader: 00:18:21.293 ==> default: -- Nvram: 00:18:21.293 ==> default: -- Base box: spdk/fedora39 00:18:21.293 ==> default: -- Storage pool: default 00:18:21.293 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733467493_0d546984807fc703a0fa.img (20G) 00:18:21.293 ==> default: -- Volume Cache: default 00:18:21.293 ==> default: -- Kernel: 00:18:21.293 ==> default: -- Initrd: 00:18:21.293 ==> default: -- Graphics Type: vnc 00:18:21.293 ==> default: -- Graphics Port: -1 00:18:21.293 ==> default: -- Graphics IP: 127.0.0.1 00:18:21.293 ==> default: -- Graphics Password: Not defined 00:18:21.293 ==> default: -- Video Type: cirrus 00:18:21.293 ==> default: -- Video VRAM: 9216 00:18:21.293 ==> default: -- Sound Type: 00:18:21.293 ==> default: -- Keymap: en-us 00:18:21.293 ==> default: -- TPM Path: 00:18:21.293 ==> default: -- INPUT: type=mouse, bus=ps2 00:18:21.293 ==> default: -- Command line args: 00:18:21.293 ==> default: -> value=-device, 00:18:21.293 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:18:21.293 ==> default: -> value=-drive, 00:18:21.293 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:18:21.293 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:18:21.294 ==> default: -> value=-drive, 00:18:21.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:18:21.294 ==> default: -> value=-drive, 00:18:21.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:18:21.294 ==> default: -> value=-drive, 00:18:21.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:18:21.294 ==> default: -> value=-drive, 00:18:21.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:18:21.294 ==> default: -> value=-drive, 00:18:21.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:18:21.294 ==> default: -> value=-device, 00:18:21.294 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:18:21.294 ==> default: Creating shared folders metadata... 00:18:21.294 ==> default: Starting domain. 00:18:22.677 ==> default: Waiting for domain to get an IP address... 00:18:40.803 ==> default: Waiting for SSH to become available... 00:18:40.803 ==> default: Configuring and enabling network interfaces... 00:18:43.336 default: SSH address: 192.168.121.92:22 00:18:43.336 default: SSH username: vagrant 00:18:43.336 default: SSH auth method: private key 00:18:45.236 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:18:53.347 ==> default: Mounting SSHFS shared folder... 00:18:54.746 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:18:54.746 ==> default: Checking Mount.. 00:18:55.692 ==> default: Folder Successfully Mounted! 00:18:55.692 ==> default: Running provisioner: file... 00:18:56.627 default: ~/.gitconfig => .gitconfig 00:18:56.885 00:18:56.885 SUCCESS! 00:18:56.885 00:18:56.885 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:18:56.885 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:18:56.885 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:18:56.885 00:18:56.893 [Pipeline] } 00:18:56.908 [Pipeline] // stage 00:18:56.916 [Pipeline] dir 00:18:56.916 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:18:56.918 [Pipeline] { 00:18:56.930 [Pipeline] catchError 00:18:56.931 [Pipeline] { 00:18:56.944 [Pipeline] sh 00:18:57.226 + vagrant ssh-config --host vagrant 00:18:57.226 + sed -ne /^Host/,$p 00:18:57.226 + tee ssh_conf 00:19:01.410 Host vagrant 00:19:01.410 HostName 192.168.121.92 00:19:01.410 User vagrant 00:19:01.410 Port 22 00:19:01.410 UserKnownHostsFile /dev/null 00:19:01.410 StrictHostKeyChecking no 00:19:01.411 PasswordAuthentication no 00:19:01.411 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:19:01.411 IdentitiesOnly yes 00:19:01.411 LogLevel FATAL 00:19:01.411 ForwardAgent yes 00:19:01.411 ForwardX11 yes 00:19:01.411 00:19:01.422 [Pipeline] withEnv 00:19:01.424 [Pipeline] { 00:19:01.438 [Pipeline] sh 00:19:01.732 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:19:01.733 source /etc/os-release 00:19:01.733 [[ -e /image.version ]] && img=$(< /image.version) 00:19:01.733 # Minimal, systemd-like check. 
00:19:01.733 if [[ -e /.dockerenv ]]; then 00:19:01.733 # Clear garbage from the node's name: 00:19:01.733 # agt-er_autotest_547-896 -> autotest_547-896 00:19:01.733 # $HOSTNAME is the actual container id 00:19:01.733 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:19:01.733 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:19:01.733 # We can assume this is a mount from a host where container is running, 00:19:01.733 # so fetch its hostname to easily identify the target swarm worker. 00:19:01.733 container="$(< /etc/hostname) ($agent)" 00:19:01.733 else 00:19:01.733 # Fallback 00:19:01.733 container=$agent 00:19:01.733 fi 00:19:01.733 fi 00:19:01.733 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:19:01.733 00:19:01.743 [Pipeline] } 00:19:01.758 [Pipeline] // withEnv 00:19:01.765 [Pipeline] setCustomBuildProperty 00:19:01.778 [Pipeline] stage 00:19:01.780 [Pipeline] { (Tests) 00:19:01.796 [Pipeline] sh 00:19:02.073 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:19:02.343 [Pipeline] sh 00:19:02.632 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:19:02.643 [Pipeline] timeout 00:19:02.644 Timeout set to expire in 50 min 00:19:02.645 [Pipeline] { 00:19:02.654 [Pipeline] sh 00:19:02.926 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:19:03.492 HEAD is now at f501a7223 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata) 00:19:03.505 [Pipeline] sh 00:19:03.785 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:19:04.058 [Pipeline] sh 00:19:04.336 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:19:04.624 [Pipeline] sh 00:19:04.903 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:19:04.904 ++ readlink -f spdk_repo 00:19:05.162 + DIR_ROOT=/home/vagrant/spdk_repo 00:19:05.162 + [[ -n /home/vagrant/spdk_repo ]] 00:19:05.162 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:19:05.162 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:19:05.162 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:19:05.162 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:19:05.162 + [[ -d /home/vagrant/spdk_repo/output ]] 00:19:05.162 + [[ nvme-vg-autotest == pkgdep-* ]] 00:19:05.162 + cd /home/vagrant/spdk_repo 00:19:05.162 + source /etc/os-release 00:19:05.162 ++ NAME='Fedora Linux' 00:19:05.162 ++ VERSION='39 (Cloud Edition)' 00:19:05.162 ++ ID=fedora 00:19:05.162 ++ VERSION_ID=39 00:19:05.162 ++ VERSION_CODENAME= 00:19:05.162 ++ PLATFORM_ID=platform:f39 00:19:05.162 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:19:05.162 ++ ANSI_COLOR='0;38;2;60;110;180' 00:19:05.162 ++ LOGO=fedora-logo-icon 00:19:05.162 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:19:05.162 ++ HOME_URL=https://fedoraproject.org/ 00:19:05.162 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:19:05.162 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:19:05.162 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:19:05.162 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:19:05.162 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:19:05.162 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:19:05.162 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:19:05.162 ++ SUPPORT_END=2024-11-12 00:19:05.162 ++ VARIANT='Cloud Edition' 00:19:05.162 ++ VARIANT_ID=cloud 00:19:05.162 + uname -a 00:19:05.162 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:19:05.162 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:19:05.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:05.679 Hugepages 00:19:05.679 node hugesize free / total 00:19:05.679 node0 1048576kB 0 / 0 00:19:05.679 node0 2048kB 0 / 0 00:19:05.679 00:19:05.679 Type BDF Vendor Device NUMA Driver Device Block devices 00:19:05.679 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:19:05.679 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:19:05.679 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:19:05.679 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:19:05.679 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:19:05.938 + rm -f /tmp/spdk-ld-path 00:19:05.938 + source autorun-spdk.conf 00:19:05.938 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:19:05.938 ++ SPDK_TEST_NVME=1 00:19:05.938 ++ SPDK_TEST_FTL=1 00:19:05.938 ++ SPDK_TEST_ISAL=1 00:19:05.938 ++ SPDK_RUN_ASAN=1 00:19:05.938 ++ SPDK_RUN_UBSAN=1 00:19:05.938 ++ SPDK_TEST_XNVME=1 00:19:05.938 ++ SPDK_TEST_NVME_FDP=1 00:19:05.938 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:19:05.938 ++ RUN_NIGHTLY=0 00:19:05.938 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:19:05.938 + [[ -n '' ]] 00:19:05.938 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:19:05.938 + for M in /var/spdk/build-*-manifest.txt 00:19:05.938 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:19:05.938 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:19:05.938 + for M in /var/spdk/build-*-manifest.txt 00:19:05.938 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:19:05.938 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:19:05.938 + for M in /var/spdk/build-*-manifest.txt 00:19:05.938 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:19:05.938 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:19:05.938 ++ uname 00:19:05.938 + [[ Linux == \L\i\n\u\x ]] 00:19:05.938 + sudo dmesg -T 00:19:05.938 + sudo dmesg --clear 00:19:05.938 + dmesg_pid=5290 00:19:05.938 
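The setup.sh status table above maps each emulated controller to a PCI address and a block device: 0000:00:10.0 through 0000:00:13.0 come up as nvme0 through nvme3, matching the four "-device nvme" entries (serials 12340-12343) passed to QEMU during VM creation. A quick way to cross-check that mapping from inside the guest, sketched here on the assumption that the nvme-cli package is available in the image:

    # List every NVMe controller and namespace the guest kernel sees;
    # the SN column should show serials 12340..12343 in PCI-address order.
    sudo nvme list
    # The serial=12340 controller was created with ms=64 for the FTL tests;
    # confirm its namespace really carries 64 bytes of metadata per block.
    sudo nvme id-ns /dev/nvme0n1 | grep -i 'ms:64'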
+ [[ Fedora Linux == FreeBSD ]] 00:19:05.938 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:05.938 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:05.938 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:19:05.938 + sudo dmesg -Tw 00:19:05.938 + [[ -x /usr/src/fio-static/fio ]] 00:19:05.938 + export FIO_BIN=/usr/src/fio-static/fio 00:19:05.938 + FIO_BIN=/usr/src/fio-static/fio 00:19:05.938 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:19:05.938 + [[ ! -v VFIO_QEMU_BIN ]] 00:19:05.938 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:19:05.938 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:05.938 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:05.938 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:19:05.938 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:05.938 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:05.938 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:19:05.938 06:45:38 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:19:05.938 06:45:38 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:19:05.938 06:45:38 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:19:05.938 06:45:38 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:19:05.938 06:45:38 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:19:05.938 06:45:38 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:19:05.938 06:45:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:05.938 06:45:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:19:05.938 06:45:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:05.938 06:45:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.938 06:45:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.938 06:45:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.938 06:45:38 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.938 06:45:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.938 06:45:38 -- paths/export.sh@5 -- $ export PATH 00:19:05.938 06:45:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.938 06:45:38 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:05.938 06:45:38 -- common/autobuild_common.sh@493 -- $ date +%s 00:19:06.196 06:45:38 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733467538.XXXXXX 00:19:06.196 06:45:38 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733467538.VMAOz6 00:19:06.196 06:45:38 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:19:06.196 06:45:38 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:19:06.196 06:45:38 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:19:06.196 06:45:38 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:06.196 06:45:38 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:06.196 06:45:38 -- common/autobuild_common.sh@509 -- $ get_config_params 00:19:06.196 06:45:38 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:19:06.196 06:45:38 -- common/autotest_common.sh@10 -- $ set +x 00:19:06.196 06:45:38 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:19:06.196 06:45:38 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:19:06.196 06:45:38 -- pm/common@17 -- $ local monitor 00:19:06.196 06:45:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:06.196 06:45:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:06.196 06:45:38 -- pm/common@25 -- $ sleep 1 00:19:06.196 06:45:38 -- pm/common@21 -- $ date +%s 00:19:06.196 06:45:38 -- pm/common@21 -- $ date +%s 00:19:06.196 06:45:38 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733467538 00:19:06.196 06:45:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733467538 00:19:06.196 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733467538_collect-vmstat.pm.log 00:19:06.196 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733467538_collect-cpu-load.pm.log 00:19:07.138 06:45:39 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:19:07.138 06:45:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:19:07.138 06:45:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:19:07.138 06:45:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:07.138 06:45:39 -- spdk/autobuild.sh@16 -- $ date -u 00:19:07.138 Fri Dec 6 06:45:39 AM UTC 2024 00:19:07.138 06:45:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:19:07.138 v25.01-pre-305-gf501a7223 00:19:07.138 06:45:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:19:07.138 06:45:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:19:07.138 06:45:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:19:07.138 06:45:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:19:07.138 06:45:39 -- common/autotest_common.sh@10 -- $ set +x 00:19:07.138 ************************************ 00:19:07.138 START TEST asan 00:19:07.138 ************************************ 00:19:07.138 using asan 00:19:07.138 06:45:39 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:19:07.138 00:19:07.138 real 0m0.000s 00:19:07.138 user 0m0.000s 00:19:07.138 sys 0m0.000s 00:19:07.138 06:45:39 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:19:07.138 06:45:39 asan -- common/autotest_common.sh@10 -- $ set +x 00:19:07.138 ************************************ 00:19:07.138 END TEST asan 00:19:07.138 ************************************ 00:19:07.138 06:45:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:19:07.138 06:45:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:19:07.138 06:45:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:19:07.138 06:45:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:19:07.138 06:45:39 -- common/autotest_common.sh@10 -- $ set +x 00:19:07.138 ************************************ 00:19:07.138 START TEST ubsan 00:19:07.138 ************************************ 00:19:07.138 using ubsan 00:19:07.138 06:45:39 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:19:07.138 00:19:07.138 real 0m0.000s 00:19:07.138 user 0m0.000s 00:19:07.138 sys 0m0.000s 00:19:07.138 06:45:39 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:19:07.138 06:45:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:19:07.138 ************************************ 00:19:07.138 END TEST ubsan 00:19:07.138 ************************************ 00:19:07.138 06:45:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:19:07.139 06:45:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:19:07.139 06:45:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:19:07.139 06:45:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:19:07.139 06:45:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:19:07.139 06:45:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:19:07.139 06:45:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:19:07.139 06:45:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:19:07.139 06:45:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:19:07.397 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:07.397 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:07.656 Using 'verbs' RDMA provider 00:19:21.256 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:19:33.490 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:19:33.748 Creating mk/config.mk...done. 00:19:33.748 Creating mk/cc.flags.mk...done. 00:19:33.748 Type 'make' to build. 00:19:33.748 06:46:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:19:33.748 06:46:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:19:33.748 06:46:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:19:33.748 06:46:06 -- common/autotest_common.sh@10 -- $ set +x 00:19:33.748 ************************************ 00:19:33.748 START TEST make 00:19:33.748 ************************************ 00:19:33.748 06:46:06 make -- common/autotest_common.sh@1129 -- $ make -j10 00:19:34.006 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:19:34.006 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:19:34.006 meson setup builddir \ 00:19:34.006 -Dwith-libaio=enabled \ 00:19:34.006 -Dwith-liburing=enabled \ 00:19:34.006 -Dwith-libvfn=disabled \ 00:19:34.006 -Dwith-spdk=disabled \ 00:19:34.006 -Dexamples=false \ 00:19:34.006 -Dtests=false \ 00:19:34.006 -Dtools=false && \ 00:19:34.006 meson compile -C builddir && \ 00:19:34.006 cd -) 00:19:34.006 make[1]: Nothing to be done for 'all'. 
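The xnvme sub-build that make kicks off above is self-contained and can be reproduced outside of the SPDK tree's make. A minimal sketch, assuming meson and ninja are installed and the repository is checked out at /home/vagrant/spdk_repo/spdk as in this run (the options are the same ones echoed above):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
      -Dwith-libaio=enabled \
      -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled \
      -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir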
00:19:36.596 The Meson build system 00:19:36.596 Version: 1.5.0 00:19:36.596 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:19:36.596 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:19:36.596 Build type: native build 00:19:36.596 Project name: xnvme 00:19:36.596 Project version: 0.7.5 00:19:36.596 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:19:36.596 C linker for the host machine: cc ld.bfd 2.40-14 00:19:36.596 Host machine cpu family: x86_64 00:19:36.596 Host machine cpu: x86_64 00:19:36.596 Message: host_machine.system: linux 00:19:36.596 Compiler for C supports arguments -Wno-missing-braces: YES 00:19:36.596 Compiler for C supports arguments -Wno-cast-function-type: YES 00:19:36.596 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:19:36.596 Run-time dependency threads found: YES 00:19:36.596 Has header "setupapi.h" : NO 00:19:36.596 Has header "linux/blkzoned.h" : YES 00:19:36.596 Has header "linux/blkzoned.h" : YES (cached) 00:19:36.596 Has header "libaio.h" : YES 00:19:36.596 Library aio found: YES 00:19:36.596 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:19:36.596 Run-time dependency liburing found: YES 2.2 00:19:36.596 Dependency libvfn skipped: feature with-libvfn disabled 00:19:36.596 Found CMake: /usr/bin/cmake (3.27.7) 00:19:36.596 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:19:36.596 Subproject spdk : skipped: feature with-spdk disabled 00:19:36.596 Run-time dependency appleframeworks found: NO (tried framework) 00:19:36.596 Run-time dependency appleframeworks found: NO (tried framework) 00:19:36.596 Library rt found: YES 00:19:36.596 Checking for function "clock_gettime" with dependency -lrt: YES 00:19:36.596 Configuring xnvme_config.h using configuration 00:19:36.596 Configuring xnvme.spec using configuration 00:19:36.596 Run-time dependency bash-completion found: YES 2.11 00:19:36.596 Message: Bash-completions: /usr/share/bash-completion/completions 00:19:36.596 Program cp found: YES (/usr/bin/cp) 00:19:36.596 Build targets in project: 3 00:19:36.596 00:19:36.596 xnvme 0.7.5 00:19:36.596 00:19:36.596 Subprojects 00:19:36.596 spdk : NO Feature 'with-spdk' disabled 00:19:36.596 00:19:36.596 User defined options 00:19:36.596 examples : false 00:19:36.596 tests : false 00:19:36.596 tools : false 00:19:36.596 with-libaio : enabled 00:19:36.596 with-liburing: enabled 00:19:36.596 with-libvfn : disabled 00:19:36.596 with-spdk : disabled 00:19:36.596 00:19:36.596 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:19:37.162 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:19:37.162 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:19:37.162 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:19:37.162 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:19:37.419 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:19:37.419 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:19:37.419 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:19:37.419 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:19:37.419 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:19:37.419 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:19:37.419 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:19:37.419 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:19:37.419 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:19:37.419 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:19:37.419 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:19:37.419 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:19:37.419 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:19:37.419 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:19:37.419 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:19:37.419 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:19:37.419 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:19:37.419 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:19:37.419 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:19:37.419 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:19:37.677 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:19:37.677 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:19:37.677 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:19:37.677 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:19:37.677 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:19:37.677 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:19:37.677 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:19:37.677 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:19:37.677 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:19:37.677 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:19:37.677 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:19:37.677 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:19:37.677 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:19:37.677 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:19:37.677 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:19:37.677 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:19:37.677 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:19:37.677 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:19:37.677 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:19:37.677 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:19:37.677 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:19:37.677 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:19:37.677 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:19:37.677 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:19:37.677 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:19:37.677 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:19:37.677 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:19:37.677 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:19:37.677 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:19:37.935 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:19:37.935 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:19:37.935 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:19:37.935 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:19:37.935 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:19:37.935 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:19:37.935 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:19:37.935 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:19:37.935 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:19:37.935 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:19:37.935 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:19:37.935 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:19:37.935 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:19:37.935 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:19:37.935 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:19:37.936 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:19:38.193 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:19:38.193 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:19:38.194 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:19:38.194 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:19:38.194 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:19:38.759 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:19:38.759 [75/76] Linking static target lib/libxnvme.a 00:19:38.759 [76/76] Linking target lib/libxnvme.so.0.7.5 00:19:38.759 INFO: autodetecting backend as ninja 00:19:38.759 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:19:38.759 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:19:48.729 The Meson build system 00:19:48.729 Version: 1.5.0 00:19:48.729 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:19:48.729 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:19:48.729 Build type: native build 00:19:48.729 Program cat found: YES (/usr/bin/cat) 00:19:48.729 Project name: DPDK 00:19:48.729 Project version: 24.03.0 00:19:48.729 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:19:48.729 C linker for the host machine: cc ld.bfd 2.40-14 00:19:48.729 Host machine cpu family: x86_64 00:19:48.729 Host machine cpu: x86_64 00:19:48.729 Message: ## Building in Developer Mode ## 00:19:48.729 Program pkg-config found: YES (/usr/bin/pkg-config) 00:19:48.729 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:19:48.729 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:19:48.729 Program python3 found: YES (/usr/bin/python3) 00:19:48.729 Program cat found: YES (/usr/bin/cat) 00:19:48.729 Compiler for C supports arguments -march=native: YES 00:19:48.729 Checking for size of "void *" : 8 00:19:48.729 Checking for size of "void *" : 8 (cached) 00:19:48.729 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:19:48.729 Library m found: YES 00:19:48.729 Library numa found: YES 00:19:48.729 Has header "numaif.h" : YES 00:19:48.729 Library fdt found: NO 00:19:48.729 Library execinfo found: NO 00:19:48.729 Has header "execinfo.h" : YES 00:19:48.729 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:19:48.729 Run-time dependency libarchive found: NO (tried pkgconfig) 00:19:48.729 Run-time dependency libbsd found: NO (tried pkgconfig) 00:19:48.729 Run-time dependency jansson found: NO (tried pkgconfig) 00:19:48.729 Run-time dependency openssl found: YES 3.1.1 00:19:48.729 Run-time dependency libpcap found: YES 1.10.4 00:19:48.729 Has header "pcap.h" with dependency libpcap: YES 00:19:48.729 Compiler for C supports arguments -Wcast-qual: YES 00:19:48.729 Compiler for C supports arguments -Wdeprecated: YES 00:19:48.729 Compiler for C supports arguments -Wformat: YES 00:19:48.729 Compiler for C supports arguments -Wformat-nonliteral: NO 00:19:48.729 Compiler for C supports arguments -Wformat-security: NO 00:19:48.729 Compiler for C supports arguments -Wmissing-declarations: YES 00:19:48.729 Compiler for C supports arguments -Wmissing-prototypes: YES 00:19:48.729 Compiler for C supports arguments -Wnested-externs: YES 00:19:48.729 Compiler for C supports arguments -Wold-style-definition: YES 00:19:48.729 Compiler for C supports arguments -Wpointer-arith: YES 00:19:48.729 Compiler for C supports arguments -Wsign-compare: YES 00:19:48.729 Compiler for C supports arguments -Wstrict-prototypes: YES 00:19:48.729 Compiler for C supports arguments -Wundef: YES 00:19:48.729 Compiler for C supports arguments -Wwrite-strings: YES 00:19:48.729 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:19:48.729 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:19:48.729 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:19:48.729 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:19:48.729 Program objdump found: YES (/usr/bin/objdump) 00:19:48.729 Compiler for C supports arguments -mavx512f: YES 00:19:48.729 Checking if "AVX512 checking" compiles: YES 00:19:48.729 Fetching value of define "__SSE4_2__" : 1 00:19:48.729 Fetching value of define "__AES__" : 1 00:19:48.729 Fetching value of define "__AVX__" : 1 00:19:48.729 Fetching value of define "__AVX2__" : 1 00:19:48.729 Fetching value of define "__AVX512BW__" : (undefined) 00:19:48.729 Fetching value of define "__AVX512CD__" : (undefined) 00:19:48.729 Fetching value of define "__AVX512DQ__" : (undefined) 00:19:48.729 Fetching value of define "__AVX512F__" : (undefined) 00:19:48.729 Fetching value of define "__AVX512VL__" : (undefined) 00:19:48.729 Fetching value of define "__PCLMUL__" : 1 00:19:48.729 Fetching value of define "__RDRND__" : 1 00:19:48.729 Fetching value of define "__RDSEED__" : 1 00:19:48.729 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:19:48.729 Fetching value of define "__znver1__" : (undefined) 00:19:48.730 Fetching value of define "__znver2__" : (undefined) 00:19:48.730 Fetching value of define "__znver3__" : (undefined) 00:19:48.730 Fetching value of define "__znver4__" : (undefined) 00:19:48.730 Library asan found: YES 00:19:48.730 Compiler for C supports arguments -Wno-format-truncation: YES 00:19:48.730 Message: lib/log: Defining dependency "log" 00:19:48.730 Message: lib/kvargs: Defining dependency "kvargs" 00:19:48.730 Message: lib/telemetry: Defining dependency "telemetry" 00:19:48.730 Library rt found: YES 00:19:48.730 
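Each "Compiler for C supports arguments ..." line above is a feature probe: the build system compiles a tiny test program with the candidate flag and records YES or NO. Roughly the following shell equivalent, shown only to illustrate what is being tested, not as Meson's actual mechanism:

    # Probe whether cc accepts -Wcast-qual. -Werror makes an
    # "unknown warning option" diagnostic fail the probe, so an
    # unsupported flag comes back as NO instead of a silent pass.
    echo 'int main(void) { return 0; }' \
      | cc -Werror -Wcast-qual -x c - -o /dev/null \
      && echo 'Compiler for C supports arguments -Wcast-qual: YES'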
Checking for function "getentropy" : NO 00:19:48.730 Message: lib/eal: Defining dependency "eal" 00:19:48.730 Message: lib/ring: Defining dependency "ring" 00:19:48.730 Message: lib/rcu: Defining dependency "rcu" 00:19:48.730 Message: lib/mempool: Defining dependency "mempool" 00:19:48.730 Message: lib/mbuf: Defining dependency "mbuf" 00:19:48.730 Fetching value of define "__PCLMUL__" : 1 (cached) 00:19:48.730 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:19:48.730 Compiler for C supports arguments -mpclmul: YES 00:19:48.730 Compiler for C supports arguments -maes: YES 00:19:48.730 Compiler for C supports arguments -mavx512f: YES (cached) 00:19:48.730 Compiler for C supports arguments -mavx512bw: YES 00:19:48.730 Compiler for C supports arguments -mavx512dq: YES 00:19:48.730 Compiler for C supports arguments -mavx512vl: YES 00:19:48.730 Compiler for C supports arguments -mvpclmulqdq: YES 00:19:48.730 Compiler for C supports arguments -mavx2: YES 00:19:48.730 Compiler for C supports arguments -mavx: YES 00:19:48.730 Message: lib/net: Defining dependency "net" 00:19:48.730 Message: lib/meter: Defining dependency "meter" 00:19:48.730 Message: lib/ethdev: Defining dependency "ethdev" 00:19:48.730 Message: lib/pci: Defining dependency "pci" 00:19:48.730 Message: lib/cmdline: Defining dependency "cmdline" 00:19:48.730 Message: lib/hash: Defining dependency "hash" 00:19:48.730 Message: lib/timer: Defining dependency "timer" 00:19:48.730 Message: lib/compressdev: Defining dependency "compressdev" 00:19:48.730 Message: lib/cryptodev: Defining dependency "cryptodev" 00:19:48.730 Message: lib/dmadev: Defining dependency "dmadev" 00:19:48.730 Compiler for C supports arguments -Wno-cast-qual: YES 00:19:48.730 Message: lib/power: Defining dependency "power" 00:19:48.730 Message: lib/reorder: Defining dependency "reorder" 00:19:48.730 Message: lib/security: Defining dependency "security" 00:19:48.730 Has header "linux/userfaultfd.h" : YES 00:19:48.730 Has header "linux/vduse.h" : YES 00:19:48.730 Message: lib/vhost: Defining dependency "vhost" 00:19:48.730 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:19:48.730 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:19:48.730 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:19:48.730 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:19:48.730 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:19:48.730 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:19:48.730 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:19:48.730 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:19:48.730 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:19:48.730 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:19:48.730 Program doxygen found: YES (/usr/local/bin/doxygen) 00:19:48.730 Configuring doxy-api-html.conf using configuration 00:19:48.730 Configuring doxy-api-man.conf using configuration 00:19:48.730 Program mandb found: YES (/usr/bin/mandb) 00:19:48.730 Program sphinx-build found: NO 00:19:48.730 Configuring rte_build_config.h using configuration 00:19:48.730 Message: 00:19:48.730 ================= 00:19:48.730 Applications Enabled 00:19:48.730 ================= 00:19:48.730 00:19:48.730 apps: 00:19:48.730 00:19:48.730 00:19:48.730 Message: 00:19:48.730 ================= 00:19:48.730 Libraries Enabled 00:19:48.730 ================= 
00:19:48.730 00:19:48.730 libs: 00:19:48.730 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:19:48.730 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:19:48.730 cryptodev, dmadev, power, reorder, security, vhost, 00:19:48.730 00:19:48.730 Message: 00:19:48.730 =============== 00:19:48.730 Drivers Enabled 00:19:48.730 =============== 00:19:48.730 00:19:48.730 common: 00:19:48.730 00:19:48.730 bus: 00:19:48.730 pci, vdev, 00:19:48.730 mempool: 00:19:48.730 ring, 00:19:48.730 dma: 00:19:48.730 00:19:48.730 net: 00:19:48.730 00:19:48.730 crypto: 00:19:48.730 00:19:48.730 compress: 00:19:48.730 00:19:48.730 vdpa: 00:19:48.730 00:19:48.730 00:19:48.730 Message: 00:19:48.730 ================= 00:19:48.730 Content Skipped 00:19:48.730 ================= 00:19:48.730 00:19:48.730 apps: 00:19:48.730 dumpcap: explicitly disabled via build config 00:19:48.730 graph: explicitly disabled via build config 00:19:48.730 pdump: explicitly disabled via build config 00:19:48.730 proc-info: explicitly disabled via build config 00:19:48.730 test-acl: explicitly disabled via build config 00:19:48.730 test-bbdev: explicitly disabled via build config 00:19:48.730 test-cmdline: explicitly disabled via build config 00:19:48.730 test-compress-perf: explicitly disabled via build config 00:19:48.730 test-crypto-perf: explicitly disabled via build config 00:19:48.730 test-dma-perf: explicitly disabled via build config 00:19:48.730 test-eventdev: explicitly disabled via build config 00:19:48.730 test-fib: explicitly disabled via build config 00:19:48.730 test-flow-perf: explicitly disabled via build config 00:19:48.730 test-gpudev: explicitly disabled via build config 00:19:48.730 test-mldev: explicitly disabled via build config 00:19:48.730 test-pipeline: explicitly disabled via build config 00:19:48.730 test-pmd: explicitly disabled via build config 00:19:48.730 test-regex: explicitly disabled via build config 00:19:48.730 test-sad: explicitly disabled via build config 00:19:48.730 test-security-perf: explicitly disabled via build config 00:19:48.730 00:19:48.730 libs: 00:19:48.730 argparse: explicitly disabled via build config 00:19:48.730 metrics: explicitly disabled via build config 00:19:48.730 acl: explicitly disabled via build config 00:19:48.730 bbdev: explicitly disabled via build config 00:19:48.730 bitratestats: explicitly disabled via build config 00:19:48.730 bpf: explicitly disabled via build config 00:19:48.730 cfgfile: explicitly disabled via build config 00:19:48.730 distributor: explicitly disabled via build config 00:19:48.730 efd: explicitly disabled via build config 00:19:48.730 eventdev: explicitly disabled via build config 00:19:48.730 dispatcher: explicitly disabled via build config 00:19:48.730 gpudev: explicitly disabled via build config 00:19:48.730 gro: explicitly disabled via build config 00:19:48.730 gso: explicitly disabled via build config 00:19:48.730 ip_frag: explicitly disabled via build config 00:19:48.730 jobstats: explicitly disabled via build config 00:19:48.730 latencystats: explicitly disabled via build config 00:19:48.730 lpm: explicitly disabled via build config 00:19:48.730 member: explicitly disabled via build config 00:19:48.730 pcapng: explicitly disabled via build config 00:19:48.730 rawdev: explicitly disabled via build config 00:19:48.730 regexdev: explicitly disabled via build config 00:19:48.730 mldev: explicitly disabled via build config 00:19:48.730 rib: explicitly disabled via build config 00:19:48.730 sched: explicitly disabled via build 
config 00:19:48.730 stack: explicitly disabled via build config 00:19:48.730 ipsec: explicitly disabled via build config 00:19:48.730 pdcp: explicitly disabled via build config 00:19:48.730 fib: explicitly disabled via build config 00:19:48.730 port: explicitly disabled via build config 00:19:48.730 pdump: explicitly disabled via build config 00:19:48.730 table: explicitly disabled via build config 00:19:48.730 pipeline: explicitly disabled via build config 00:19:48.730 graph: explicitly disabled via build config 00:19:48.730 node: explicitly disabled via build config 00:19:48.730 00:19:48.730 drivers: 00:19:48.730 common/cpt: not in enabled drivers build config 00:19:48.730 common/dpaax: not in enabled drivers build config 00:19:48.730 common/iavf: not in enabled drivers build config 00:19:48.730 common/idpf: not in enabled drivers build config 00:19:48.730 common/ionic: not in enabled drivers build config 00:19:48.730 common/mvep: not in enabled drivers build config 00:19:48.730 common/octeontx: not in enabled drivers build config 00:19:48.730 bus/auxiliary: not in enabled drivers build config 00:19:48.730 bus/cdx: not in enabled drivers build config 00:19:48.730 bus/dpaa: not in enabled drivers build config 00:19:48.730 bus/fslmc: not in enabled drivers build config 00:19:48.730 bus/ifpga: not in enabled drivers build config 00:19:48.730 bus/platform: not in enabled drivers build config 00:19:48.730 bus/uacce: not in enabled drivers build config 00:19:48.730 bus/vmbus: not in enabled drivers build config 00:19:48.730 common/cnxk: not in enabled drivers build config 00:19:48.730 common/mlx5: not in enabled drivers build config 00:19:48.730 common/nfp: not in enabled drivers build config 00:19:48.730 common/nitrox: not in enabled drivers build config 00:19:48.730 common/qat: not in enabled drivers build config 00:19:48.730 common/sfc_efx: not in enabled drivers build config 00:19:48.730 mempool/bucket: not in enabled drivers build config 00:19:48.730 mempool/cnxk: not in enabled drivers build config 00:19:48.730 mempool/dpaa: not in enabled drivers build config 00:19:48.730 mempool/dpaa2: not in enabled drivers build config 00:19:48.730 mempool/octeontx: not in enabled drivers build config 00:19:48.730 mempool/stack: not in enabled drivers build config 00:19:48.730 dma/cnxk: not in enabled drivers build config 00:19:48.730 dma/dpaa: not in enabled drivers build config 00:19:48.730 dma/dpaa2: not in enabled drivers build config 00:19:48.731 dma/hisilicon: not in enabled drivers build config 00:19:48.731 dma/idxd: not in enabled drivers build config 00:19:48.731 dma/ioat: not in enabled drivers build config 00:19:48.731 dma/skeleton: not in enabled drivers build config 00:19:48.731 net/af_packet: not in enabled drivers build config 00:19:48.731 net/af_xdp: not in enabled drivers build config 00:19:48.731 net/ark: not in enabled drivers build config 00:19:48.731 net/atlantic: not in enabled drivers build config 00:19:48.731 net/avp: not in enabled drivers build config 00:19:48.731 net/axgbe: not in enabled drivers build config 00:19:48.731 net/bnx2x: not in enabled drivers build config 00:19:48.731 net/bnxt: not in enabled drivers build config 00:19:48.731 net/bonding: not in enabled drivers build config 00:19:48.731 net/cnxk: not in enabled drivers build config 00:19:48.731 net/cpfl: not in enabled drivers build config 00:19:48.731 net/cxgbe: not in enabled drivers build config 00:19:48.731 net/dpaa: not in enabled drivers build config 00:19:48.731 net/dpaa2: not in enabled drivers build 
config 00:19:48.731 net/e1000: not in enabled drivers build config 00:19:48.731 net/ena: not in enabled drivers build config 00:19:48.731 net/enetc: not in enabled drivers build config 00:19:48.731 net/enetfec: not in enabled drivers build config 00:19:48.731 net/enic: not in enabled drivers build config 00:19:48.731 net/failsafe: not in enabled drivers build config 00:19:48.731 net/fm10k: not in enabled drivers build config 00:19:48.731 net/gve: not in enabled drivers build config 00:19:48.731 net/hinic: not in enabled drivers build config 00:19:48.731 net/hns3: not in enabled drivers build config 00:19:48.731 net/i40e: not in enabled drivers build config 00:19:48.731 net/iavf: not in enabled drivers build config 00:19:48.731 net/ice: not in enabled drivers build config 00:19:48.731 net/idpf: not in enabled drivers build config 00:19:48.731 net/igc: not in enabled drivers build config 00:19:48.731 net/ionic: not in enabled drivers build config 00:19:48.731 net/ipn3ke: not in enabled drivers build config 00:19:48.731 net/ixgbe: not in enabled drivers build config 00:19:48.731 net/mana: not in enabled drivers build config 00:19:48.731 net/memif: not in enabled drivers build config 00:19:48.731 net/mlx4: not in enabled drivers build config 00:19:48.731 net/mlx5: not in enabled drivers build config 00:19:48.731 net/mvneta: not in enabled drivers build config 00:19:48.731 net/mvpp2: not in enabled drivers build config 00:19:48.731 net/netvsc: not in enabled drivers build config 00:19:48.731 net/nfb: not in enabled drivers build config 00:19:48.731 net/nfp: not in enabled drivers build config 00:19:48.731 net/ngbe: not in enabled drivers build config 00:19:48.731 net/null: not in enabled drivers build config 00:19:48.731 net/octeontx: not in enabled drivers build config 00:19:48.731 net/octeon_ep: not in enabled drivers build config 00:19:48.731 net/pcap: not in enabled drivers build config 00:19:48.731 net/pfe: not in enabled drivers build config 00:19:48.731 net/qede: not in enabled drivers build config 00:19:48.731 net/ring: not in enabled drivers build config 00:19:48.731 net/sfc: not in enabled drivers build config 00:19:48.731 net/softnic: not in enabled drivers build config 00:19:48.731 net/tap: not in enabled drivers build config 00:19:48.731 net/thunderx: not in enabled drivers build config 00:19:48.731 net/txgbe: not in enabled drivers build config 00:19:48.731 net/vdev_netvsc: not in enabled drivers build config 00:19:48.731 net/vhost: not in enabled drivers build config 00:19:48.731 net/virtio: not in enabled drivers build config 00:19:48.731 net/vmxnet3: not in enabled drivers build config 00:19:48.731 raw/*: missing internal dependency, "rawdev" 00:19:48.731 crypto/armv8: not in enabled drivers build config 00:19:48.731 crypto/bcmfs: not in enabled drivers build config 00:19:48.731 crypto/caam_jr: not in enabled drivers build config 00:19:48.731 crypto/ccp: not in enabled drivers build config 00:19:48.731 crypto/cnxk: not in enabled drivers build config 00:19:48.731 crypto/dpaa_sec: not in enabled drivers build config 00:19:48.731 crypto/dpaa2_sec: not in enabled drivers build config 00:19:48.731 crypto/ipsec_mb: not in enabled drivers build config 00:19:48.731 crypto/mlx5: not in enabled drivers build config 00:19:48.731 crypto/mvsam: not in enabled drivers build config 00:19:48.731 crypto/nitrox: not in enabled drivers build config 00:19:48.731 crypto/null: not in enabled drivers build config 00:19:48.731 crypto/octeontx: not in enabled drivers build config 00:19:48.731 
crypto/openssl: not in enabled drivers build config 00:19:48.731 crypto/scheduler: not in enabled drivers build config 00:19:48.731 crypto/uadk: not in enabled drivers build config 00:19:48.731 crypto/virtio: not in enabled drivers build config 00:19:48.731 compress/isal: not in enabled drivers build config 00:19:48.731 compress/mlx5: not in enabled drivers build config 00:19:48.731 compress/nitrox: not in enabled drivers build config 00:19:48.731 compress/octeontx: not in enabled drivers build config 00:19:48.731 compress/zlib: not in enabled drivers build config 00:19:48.731 regex/*: missing internal dependency, "regexdev" 00:19:48.731 ml/*: missing internal dependency, "mldev" 00:19:48.731 vdpa/ifc: not in enabled drivers build config 00:19:48.731 vdpa/mlx5: not in enabled drivers build config 00:19:48.731 vdpa/nfp: not in enabled drivers build config 00:19:48.731 vdpa/sfc: not in enabled drivers build config 00:19:48.731 event/*: missing internal dependency, "eventdev" 00:19:48.731 baseband/*: missing internal dependency, "bbdev" 00:19:48.731 gpu/*: missing internal dependency, "gpudev" 00:19:48.731 00:19:48.731 00:19:48.731 Build targets in project: 85 00:19:48.731 00:19:48.731 DPDK 24.03.0 00:19:48.731 00:19:48.731 User defined options 00:19:48.731 buildtype : debug 00:19:48.731 default_library : shared 00:19:48.731 libdir : lib 00:19:48.731 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:48.731 b_sanitize : address 00:19:48.731 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:19:48.731 c_link_args : 00:19:48.731 cpu_instruction_set: native 00:19:48.731 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:19:48.731 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:19:48.731 enable_docs : false 00:19:48.731 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:19:48.731 enable_kmods : false 00:19:48.731 max_lcores : 128 00:19:48.731 tests : false 00:19:48.731 00:19:48.731 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:19:48.731 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:19:48.731 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:19:48.731 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:19:48.731 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:19:48.731 [4/268] Linking static target lib/librte_kvargs.a 00:19:48.731 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:19:48.731 [6/268] Linking static target lib/librte_log.a 00:19:48.990 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.247 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:19:49.247 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:19:49.247 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:19:49.247 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:19:49.505 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:19:49.505 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:19:49.505 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:19:49.505 [15/268] Linking static target lib/librte_telemetry.a 00:19:49.505 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:19:49.505 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:19:49.505 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:19:49.763 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:19:49.763 [20/268] Linking target lib/librte_log.so.24.1 00:19:50.022 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:19:50.022 [22/268] Linking target lib/librte_kvargs.so.24.1 00:19:50.281 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:19:50.281 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:19:50.281 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:19:50.281 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:19:50.281 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:19:50.281 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:19:50.281 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:19:50.539 [30/268] Linking target lib/librte_telemetry.so.24.1 00:19:50.540 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:19:50.540 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:19:50.799 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:19:50.799 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:19:50.799 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:19:51.059 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:19:51.059 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:19:51.059 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:19:51.318 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:19:51.318 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:19:51.318 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:19:51.318 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:19:51.318 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:19:51.318 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:19:51.577 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:19:51.835 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:19:51.835 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:19:51.835 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:19:52.094 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:19:52.094 [50/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:19:52.094 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:19:52.353 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:19:52.353 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:19:52.353 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:19:52.353 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:19:52.611 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:19:52.611 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:19:52.870 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:19:52.870 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:19:52.870 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:19:52.870 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:19:53.128 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:19:53.386 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:19:53.386 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:19:53.386 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:19:53.386 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:19:53.644 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:19:53.644 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:19:53.644 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:19:53.903 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:19:53.903 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:19:53.903 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:19:53.903 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:19:54.162 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:19:54.162 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:19:54.162 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:19:54.162 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:19:54.419 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:19:54.419 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:19:54.419 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:19:54.676 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:19:54.676 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:19:54.676 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:19:54.932 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:19:54.932 [85/268] Linking static target lib/librte_eal.a 00:19:54.932 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:19:54.932 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:19:54.932 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:19:54.932 [89/268] Linking static target lib/librte_rcu.a 00:19:54.932 [90/268] Linking static target lib/librte_ring.a 00:19:55.189 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:19:55.189 
[92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:19:55.446 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:19:55.446 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:19:55.446 [95/268] Linking static target lib/librte_mempool.a 00:19:55.446 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:19:55.446 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:19:55.446 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:19:56.009 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:19:56.009 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:19:56.009 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:19:56.009 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:19:56.009 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:19:56.267 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:19:56.267 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:19:56.267 [106/268] Linking static target lib/librte_mbuf.a 00:19:56.267 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:19:56.832 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:19:56.832 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:19:56.832 [110/268] Linking static target lib/librte_meter.a 00:19:56.832 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:19:56.832 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:19:56.832 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:19:57.090 [114/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:19:57.090 [115/268] Linking static target lib/librte_net.a 00:19:57.090 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:19:57.090 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:19:57.348 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:19:57.606 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:19:57.606 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:19:57.606 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:19:57.863 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:19:58.429 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:19:58.429 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:19:58.429 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:19:58.429 [126/268] Linking static target lib/librte_pci.a 00:19:58.686 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:19:58.686 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:19:58.686 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:19:58.686 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:19:58.686 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:19:58.945 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 
00:19:58.945 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:19:58.945 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:19:58.945 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:19:58.945 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:19:59.204 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:19:59.204 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:19:59.204 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:19:59.204 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:19:59.204 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:19:59.204 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:19:59.204 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:19:59.204 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:19:59.462 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:19:59.462 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:19:59.462 [147/268] Linking static target lib/librte_cmdline.a 00:19:59.720 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:19:59.720 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:19:59.720 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:19:59.977 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:19:59.977 [152/268] Linking static target lib/librte_ethdev.a 00:19:59.977 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:19:59.977 [154/268] Linking static target lib/librte_timer.a 00:20:00.236 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:20:00.236 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:20:00.494 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:20:00.494 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:20:00.752 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:20:00.752 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:20:00.752 [161/268] Linking static target lib/librte_compressdev.a 00:20:00.752 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:20:01.011 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:20:01.011 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:20:01.011 [165/268] Linking static target lib/librte_hash.a 00:20:01.269 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:20:01.269 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:20:01.269 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:20:01.528 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:20:01.528 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:20:01.528 [171/268] Linking static target lib/librte_dmadev.a 00:20:01.528 [172/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:20:01.790 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:20:01.790 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:02.049 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:20:02.311 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:20:02.311 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:20:02.311 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:20:02.311 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:02.570 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:20:02.570 [181/268] Linking static target lib/librte_cryptodev.a 00:20:02.570 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:20:02.570 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:20:02.570 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:20:02.828 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:20:02.828 [186/268] Linking static target lib/librte_power.a 00:20:03.394 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:20:03.394 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:20:03.394 [189/268] Linking static target lib/librte_reorder.a 00:20:03.394 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:20:03.652 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:20:03.652 [192/268] Linking static target lib/librte_security.a 00:20:03.652 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:20:03.910 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:20:04.168 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:20:04.426 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:20:04.426 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:20:04.684 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:20:04.684 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:20:04.942 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:20:05.200 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:20:05.200 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:20:05.200 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:20:05.200 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:05.459 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:20:05.720 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:20:05.979 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:20:05.979 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:20:05.979 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:20:05.979 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:20:05.979 [211/268] Linking 
static target drivers/libtmp_rte_bus_pci.a 00:20:06.238 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:20:06.238 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:20:06.238 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:20:06.238 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:20:06.238 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:20:06.238 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:20:06.238 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:20:06.238 [219/268] Linking static target drivers/librte_bus_vdev.a 00:20:06.498 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:20:06.498 [221/268] Linking static target drivers/librte_bus_pci.a 00:20:06.498 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:20:06.498 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:20:06.498 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:20:06.498 [225/268] Linking static target drivers/librte_mempool_ring.a 00:20:06.756 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:07.015 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:20:07.581 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:20:07.839 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:20:07.839 [230/268] Linking target lib/librte_eal.so.24.1 00:20:08.100 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:20:08.100 [232/268] Linking target lib/librte_ring.so.24.1 00:20:08.100 [233/268] Linking target lib/librte_pci.so.24.1 00:20:08.100 [234/268] Linking target lib/librte_meter.so.24.1 00:20:08.100 [235/268] Linking target lib/librte_timer.so.24.1 00:20:08.100 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:20:08.100 [237/268] Linking target lib/librte_dmadev.so.24.1 00:20:08.100 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:20:08.363 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:20:08.363 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:20:08.363 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:20:08.363 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:20:08.363 [243/268] Linking target lib/librte_mempool.so.24.1 00:20:08.363 [244/268] Linking target lib/librte_rcu.so.24.1 00:20:08.363 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:20:08.363 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:20:08.363 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:20:08.620 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:20:08.620 [249/268] Linking target lib/librte_mbuf.so.24.1 00:20:08.620 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:20:08.620 [251/268] Linking target 
lib/librte_reorder.so.24.1 00:20:08.620 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:20:08.620 [253/268] Linking target lib/librte_net.so.24.1 00:20:08.620 [254/268] Linking target lib/librte_compressdev.so.24.1 00:20:08.878 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:20:08.878 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:20:08.878 [257/268] Linking target lib/librte_security.so.24.1 00:20:08.878 [258/268] Linking target lib/librte_hash.so.24.1 00:20:08.878 [259/268] Linking target lib/librte_cmdline.so.24.1 00:20:08.878 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:09.136 [261/268] Linking target lib/librte_ethdev.so.24.1 00:20:09.136 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:20:09.136 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:20:09.394 [264/268] Linking target lib/librte_power.so.24.1 00:20:11.921 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:20:11.921 [266/268] Linking static target lib/librte_vhost.a 00:20:13.361 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:20:13.361 [268/268] Linking target lib/librte_vhost.so.24.1 00:20:13.361 INFO: autodetecting backend as ninja 00:20:13.361 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:20:35.295 CC lib/log/log.o 00:20:35.295 CC lib/log/log_flags.o 00:20:35.295 CC lib/ut/ut.o 00:20:35.295 CC lib/log/log_deprecated.o 00:20:35.295 CC lib/ut_mock/mock.o 00:20:35.295 LIB libspdk_ut.a 00:20:35.295 LIB libspdk_ut_mock.a 00:20:35.295 SO libspdk_ut.so.2.0 00:20:35.295 SO libspdk_ut_mock.so.6.0 00:20:35.295 LIB libspdk_log.a 00:20:35.295 SO libspdk_log.so.7.1 00:20:35.295 SYMLINK libspdk_ut.so 00:20:35.295 SYMLINK libspdk_ut_mock.so 00:20:35.295 SYMLINK libspdk_log.so 00:20:35.295 CC lib/dma/dma.o 00:20:35.295 CXX lib/trace_parser/trace.o 00:20:35.295 CC lib/util/bit_array.o 00:20:35.295 CC lib/util/base64.o 00:20:35.295 CC lib/util/cpuset.o 00:20:35.295 CC lib/util/crc32.o 00:20:35.295 CC lib/util/crc16.o 00:20:35.295 CC lib/ioat/ioat.o 00:20:35.295 CC lib/util/crc32c.o 00:20:35.295 CC lib/vfio_user/host/vfio_user_pci.o 00:20:35.295 CC lib/util/crc32_ieee.o 00:20:35.295 LIB libspdk_dma.a 00:20:35.295 CC lib/vfio_user/host/vfio_user.o 00:20:35.295 CC lib/util/crc64.o 00:20:35.295 SO libspdk_dma.so.5.0 00:20:35.295 CC lib/util/dif.o 00:20:35.295 CC lib/util/fd.o 00:20:35.295 CC lib/util/fd_group.o 00:20:35.295 SYMLINK libspdk_dma.so 00:20:35.295 CC lib/util/file.o 00:20:35.295 CC lib/util/hexlify.o 00:20:35.295 LIB libspdk_ioat.a 00:20:35.295 SO libspdk_ioat.so.7.0 00:20:35.295 CC lib/util/iov.o 00:20:35.295 SYMLINK libspdk_ioat.so 00:20:35.295 CC lib/util/math.o 00:20:35.295 CC lib/util/net.o 00:20:35.295 CC lib/util/pipe.o 00:20:35.295 CC lib/util/strerror_tls.o 00:20:35.295 LIB libspdk_vfio_user.a 00:20:35.295 CC lib/util/string.o 00:20:35.295 SO libspdk_vfio_user.so.5.0 00:20:35.295 SYMLINK libspdk_vfio_user.so 00:20:35.295 CC lib/util/uuid.o 00:20:35.295 CC lib/util/xor.o 00:20:35.295 CC lib/util/zipf.o 00:20:35.295 CC lib/util/md5.o 00:20:35.295 LIB libspdk_util.a 00:20:35.295 SO libspdk_util.so.10.1 00:20:35.295 LIB libspdk_trace_parser.a 00:20:35.295 SO libspdk_trace_parser.so.6.0 00:20:35.554 SYMLINK libspdk_util.so 00:20:35.554 
SYMLINK libspdk_trace_parser.so 00:20:35.554 CC lib/conf/conf.o 00:20:35.554 CC lib/vmd/vmd.o 00:20:35.554 CC lib/vmd/led.o 00:20:35.554 CC lib/rdma_utils/rdma_utils.o 00:20:35.554 CC lib/idxd/idxd.o 00:20:35.554 CC lib/env_dpdk/env.o 00:20:35.554 CC lib/idxd/idxd_user.o 00:20:35.554 CC lib/idxd/idxd_kernel.o 00:20:35.554 CC lib/env_dpdk/memory.o 00:20:35.554 CC lib/json/json_parse.o 00:20:35.812 CC lib/json/json_util.o 00:20:35.812 CC lib/json/json_write.o 00:20:35.812 LIB libspdk_conf.a 00:20:35.812 SO libspdk_conf.so.6.0 00:20:36.070 CC lib/env_dpdk/pci.o 00:20:36.070 SYMLINK libspdk_conf.so 00:20:36.070 CC lib/env_dpdk/init.o 00:20:36.070 CC lib/env_dpdk/threads.o 00:20:36.070 LIB libspdk_rdma_utils.a 00:20:36.070 SO libspdk_rdma_utils.so.1.0 00:20:36.070 CC lib/env_dpdk/pci_ioat.o 00:20:36.070 SYMLINK libspdk_rdma_utils.so 00:20:36.070 CC lib/env_dpdk/pci_virtio.o 00:20:36.070 LIB libspdk_json.a 00:20:36.328 SO libspdk_json.so.6.0 00:20:36.328 CC lib/env_dpdk/pci_vmd.o 00:20:36.328 CC lib/env_dpdk/pci_idxd.o 00:20:36.328 CC lib/env_dpdk/pci_event.o 00:20:36.328 SYMLINK libspdk_json.so 00:20:36.328 CC lib/env_dpdk/sigbus_handler.o 00:20:36.328 CC lib/env_dpdk/pci_dpdk.o 00:20:36.328 CC lib/env_dpdk/pci_dpdk_2207.o 00:20:36.587 CC lib/env_dpdk/pci_dpdk_2211.o 00:20:36.587 CC lib/rdma_provider/common.o 00:20:36.587 CC lib/rdma_provider/rdma_provider_verbs.o 00:20:36.587 LIB libspdk_idxd.a 00:20:36.587 LIB libspdk_vmd.a 00:20:36.845 SO libspdk_idxd.so.12.1 00:20:36.845 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:20:36.845 CC lib/jsonrpc/jsonrpc_server.o 00:20:36.845 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:20:36.845 CC lib/jsonrpc/jsonrpc_client.o 00:20:36.845 SO libspdk_vmd.so.6.0 00:20:36.845 SYMLINK libspdk_idxd.so 00:20:36.845 SYMLINK libspdk_vmd.so 00:20:36.845 LIB libspdk_rdma_provider.a 00:20:36.845 SO libspdk_rdma_provider.so.7.0 00:20:37.103 SYMLINK libspdk_rdma_provider.so 00:20:37.103 LIB libspdk_jsonrpc.a 00:20:37.103 SO libspdk_jsonrpc.so.6.0 00:20:37.362 SYMLINK libspdk_jsonrpc.so 00:20:37.622 CC lib/rpc/rpc.o 00:20:37.622 LIB libspdk_env_dpdk.a 00:20:37.622 SO libspdk_env_dpdk.so.15.1 00:20:37.881 LIB libspdk_rpc.a 00:20:37.881 SO libspdk_rpc.so.6.0 00:20:37.881 SYMLINK libspdk_rpc.so 00:20:37.881 SYMLINK libspdk_env_dpdk.so 00:20:38.139 CC lib/notify/notify.o 00:20:38.139 CC lib/notify/notify_rpc.o 00:20:38.139 CC lib/keyring/keyring.o 00:20:38.139 CC lib/keyring/keyring_rpc.o 00:20:38.139 CC lib/trace/trace_flags.o 00:20:38.139 CC lib/trace/trace.o 00:20:38.139 CC lib/trace/trace_rpc.o 00:20:38.398 LIB libspdk_notify.a 00:20:38.398 SO libspdk_notify.so.6.0 00:20:38.398 LIB libspdk_keyring.a 00:20:38.398 SYMLINK libspdk_notify.so 00:20:38.398 SO libspdk_keyring.so.2.0 00:20:38.398 LIB libspdk_trace.a 00:20:38.398 SYMLINK libspdk_keyring.so 00:20:38.398 SO libspdk_trace.so.11.0 00:20:38.657 SYMLINK libspdk_trace.so 00:20:38.915 CC lib/thread/thread.o 00:20:38.915 CC lib/thread/iobuf.o 00:20:38.915 CC lib/sock/sock.o 00:20:38.915 CC lib/sock/sock_rpc.o 00:20:39.483 LIB libspdk_sock.a 00:20:39.483 SO libspdk_sock.so.10.0 00:20:39.483 SYMLINK libspdk_sock.so 00:20:39.741 CC lib/nvme/nvme_ctrlr_cmd.o 00:20:39.741 CC lib/nvme/nvme_fabric.o 00:20:39.741 CC lib/nvme/nvme_ctrlr.o 00:20:39.741 CC lib/nvme/nvme_ns_cmd.o 00:20:39.741 CC lib/nvme/nvme_ns.o 00:20:39.741 CC lib/nvme/nvme_pcie_common.o 00:20:39.741 CC lib/nvme/nvme_qpair.o 00:20:39.741 CC lib/nvme/nvme_pcie.o 00:20:39.741 CC lib/nvme/nvme.o 00:20:40.675 CC lib/nvme/nvme_quirks.o 00:20:40.675 CC lib/nvme/nvme_transport.o 
00:20:40.934 CC lib/nvme/nvme_discovery.o 00:20:40.934 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:20:40.934 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:20:40.934 LIB libspdk_thread.a 00:20:40.934 CC lib/nvme/nvme_tcp.o 00:20:40.934 SO libspdk_thread.so.11.0 00:20:41.191 CC lib/nvme/nvme_opal.o 00:20:41.191 SYMLINK libspdk_thread.so 00:20:41.192 CC lib/nvme/nvme_io_msg.o 00:20:41.192 CC lib/nvme/nvme_poll_group.o 00:20:41.450 CC lib/nvme/nvme_zns.o 00:20:41.450 CC lib/nvme/nvme_stubs.o 00:20:41.450 CC lib/nvme/nvme_auth.o 00:20:41.709 CC lib/nvme/nvme_cuse.o 00:20:41.709 CC lib/nvme/nvme_rdma.o 00:20:41.967 CC lib/accel/accel.o 00:20:42.225 CC lib/accel/accel_rpc.o 00:20:42.225 CC lib/accel/accel_sw.o 00:20:42.225 CC lib/blob/blobstore.o 00:20:42.225 CC lib/init/json_config.o 00:20:42.225 CC lib/blob/request.o 00:20:42.484 CC lib/blob/zeroes.o 00:20:42.484 CC lib/init/subsystem.o 00:20:42.747 CC lib/init/subsystem_rpc.o 00:20:42.747 CC lib/blob/blob_bs_dev.o 00:20:42.747 CC lib/init/rpc.o 00:20:43.007 CC lib/virtio/virtio.o 00:20:43.007 CC lib/virtio/virtio_vhost_user.o 00:20:43.007 CC lib/virtio/virtio_vfio_user.o 00:20:43.007 CC lib/virtio/virtio_pci.o 00:20:43.007 CC lib/fsdev/fsdev.o 00:20:43.007 LIB libspdk_init.a 00:20:43.007 CC lib/fsdev/fsdev_io.o 00:20:43.007 SO libspdk_init.so.6.0 00:20:43.266 SYMLINK libspdk_init.so 00:20:43.266 CC lib/fsdev/fsdev_rpc.o 00:20:43.266 LIB libspdk_virtio.a 00:20:43.524 SO libspdk_virtio.so.7.0 00:20:43.524 CC lib/event/app.o 00:20:43.524 CC lib/event/reactor.o 00:20:43.524 CC lib/event/log_rpc.o 00:20:43.524 CC lib/event/app_rpc.o 00:20:43.524 SYMLINK libspdk_virtio.so 00:20:43.524 CC lib/event/scheduler_static.o 00:20:43.524 LIB libspdk_accel.a 00:20:43.524 SO libspdk_accel.so.16.0 00:20:43.524 LIB libspdk_nvme.a 00:20:43.524 SYMLINK libspdk_accel.so 00:20:43.782 LIB libspdk_fsdev.a 00:20:43.782 SO libspdk_nvme.so.15.0 00:20:43.782 SO libspdk_fsdev.so.2.0 00:20:43.782 CC lib/bdev/bdev.o 00:20:43.782 CC lib/bdev/bdev_rpc.o 00:20:43.782 CC lib/bdev/bdev_zone.o 00:20:43.782 CC lib/bdev/part.o 00:20:43.782 CC lib/bdev/scsi_nvme.o 00:20:44.041 SYMLINK libspdk_fsdev.so 00:20:44.041 LIB libspdk_event.a 00:20:44.041 SO libspdk_event.so.14.0 00:20:44.041 SYMLINK libspdk_nvme.so 00:20:44.041 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:20:44.299 SYMLINK libspdk_event.so 00:20:44.895 LIB libspdk_fuse_dispatcher.a 00:20:44.895 SO libspdk_fuse_dispatcher.so.1.0 00:20:45.153 SYMLINK libspdk_fuse_dispatcher.so 00:20:46.529 LIB libspdk_blob.a 00:20:46.787 SO libspdk_blob.so.12.0 00:20:46.787 SYMLINK libspdk_blob.so 00:20:47.044 CC lib/blobfs/blobfs.o 00:20:47.044 CC lib/blobfs/tree.o 00:20:47.044 CC lib/lvol/lvol.o 00:20:47.978 LIB libspdk_bdev.a 00:20:47.978 SO libspdk_bdev.so.17.0 00:20:47.978 SYMLINK libspdk_bdev.so 00:20:48.235 LIB libspdk_blobfs.a 00:20:48.235 CC lib/ublk/ublk.o 00:20:48.235 CC lib/nvmf/ctrlr.o 00:20:48.235 CC lib/ublk/ublk_rpc.o 00:20:48.235 CC lib/nvmf/ctrlr_bdev.o 00:20:48.235 CC lib/nvmf/ctrlr_discovery.o 00:20:48.235 SO libspdk_blobfs.so.11.0 00:20:48.235 CC lib/ftl/ftl_core.o 00:20:48.235 CC lib/nbd/nbd.o 00:20:48.235 CC lib/scsi/dev.o 00:20:48.235 LIB libspdk_lvol.a 00:20:48.235 SYMLINK libspdk_blobfs.so 00:20:48.493 CC lib/scsi/lun.o 00:20:48.493 SO libspdk_lvol.so.11.0 00:20:48.493 SYMLINK libspdk_lvol.so 00:20:48.493 CC lib/nbd/nbd_rpc.o 00:20:48.493 CC lib/ftl/ftl_init.o 00:20:48.493 CC lib/scsi/port.o 00:20:48.751 CC lib/scsi/scsi.o 00:20:48.751 CC lib/nvmf/subsystem.o 00:20:48.751 CC lib/ftl/ftl_layout.o 00:20:48.751 CC lib/ftl/ftl_debug.o 
00:20:48.751 CC lib/scsi/scsi_bdev.o 00:20:48.751 LIB libspdk_nbd.a 00:20:48.751 CC lib/scsi/scsi_pr.o 00:20:48.751 SO libspdk_nbd.so.7.0 00:20:49.009 SYMLINK libspdk_nbd.so 00:20:49.009 CC lib/scsi/scsi_rpc.o 00:20:49.009 CC lib/scsi/task.o 00:20:49.009 CC lib/nvmf/nvmf.o 00:20:49.009 CC lib/ftl/ftl_io.o 00:20:49.267 CC lib/ftl/ftl_sb.o 00:20:49.267 LIB libspdk_ublk.a 00:20:49.267 SO libspdk_ublk.so.3.0 00:20:49.267 CC lib/nvmf/nvmf_rpc.o 00:20:49.267 CC lib/nvmf/transport.o 00:20:49.267 SYMLINK libspdk_ublk.so 00:20:49.267 CC lib/nvmf/tcp.o 00:20:49.267 CC lib/nvmf/stubs.o 00:20:49.267 CC lib/ftl/ftl_l2p.o 00:20:49.525 CC lib/ftl/ftl_l2p_flat.o 00:20:49.525 LIB libspdk_scsi.a 00:20:49.525 SO libspdk_scsi.so.9.0 00:20:49.525 SYMLINK libspdk_scsi.so 00:20:49.525 CC lib/ftl/ftl_nv_cache.o 00:20:49.525 CC lib/ftl/ftl_band.o 00:20:49.782 CC lib/nvmf/mdns_server.o 00:20:49.782 CC lib/nvmf/rdma.o 00:20:50.359 CC lib/nvmf/auth.o 00:20:50.359 CC lib/ftl/ftl_band_ops.o 00:20:50.359 CC lib/ftl/ftl_writer.o 00:20:50.359 CC lib/iscsi/conn.o 00:20:50.359 CC lib/ftl/ftl_rq.o 00:20:50.359 CC lib/vhost/vhost.o 00:20:50.616 CC lib/ftl/ftl_reloc.o 00:20:50.616 CC lib/vhost/vhost_rpc.o 00:20:50.616 CC lib/vhost/vhost_scsi.o 00:20:50.874 CC lib/ftl/ftl_l2p_cache.o 00:20:50.874 CC lib/ftl/ftl_p2l.o 00:20:51.132 CC lib/vhost/vhost_blk.o 00:20:51.132 CC lib/iscsi/init_grp.o 00:20:51.132 CC lib/iscsi/iscsi.o 00:20:51.390 CC lib/ftl/ftl_p2l_log.o 00:20:51.390 CC lib/vhost/rte_vhost_user.o 00:20:51.391 CC lib/iscsi/param.o 00:20:51.391 CC lib/ftl/mngt/ftl_mngt.o 00:20:51.649 CC lib/iscsi/portal_grp.o 00:20:51.649 CC lib/iscsi/tgt_node.o 00:20:51.649 CC lib/iscsi/iscsi_subsystem.o 00:20:51.649 CC lib/iscsi/iscsi_rpc.o 00:20:51.913 CC lib/iscsi/task.o 00:20:51.913 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:20:51.913 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:20:52.171 CC lib/ftl/mngt/ftl_mngt_startup.o 00:20:52.171 CC lib/ftl/mngt/ftl_mngt_md.o 00:20:52.171 CC lib/ftl/mngt/ftl_mngt_misc.o 00:20:52.171 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:20:52.171 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:20:52.171 CC lib/ftl/mngt/ftl_mngt_band.o 00:20:52.171 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:20:52.429 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:20:52.429 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:20:52.429 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:20:52.429 CC lib/ftl/utils/ftl_conf.o 00:20:52.429 CC lib/ftl/utils/ftl_md.o 00:20:52.429 CC lib/ftl/utils/ftl_mempool.o 00:20:52.429 CC lib/ftl/utils/ftl_bitmap.o 00:20:52.686 LIB libspdk_vhost.a 00:20:52.686 CC lib/ftl/utils/ftl_property.o 00:20:52.686 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:20:52.686 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:20:52.686 SO libspdk_vhost.so.8.0 00:20:52.686 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:20:52.686 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:20:52.943 SYMLINK libspdk_vhost.so 00:20:52.943 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:20:52.943 LIB libspdk_nvmf.a 00:20:52.943 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:20:52.943 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:20:52.943 CC lib/ftl/upgrade/ftl_sb_v3.o 00:20:52.943 CC lib/ftl/upgrade/ftl_sb_v5.o 00:20:52.943 CC lib/ftl/nvc/ftl_nvc_dev.o 00:20:52.943 SO libspdk_nvmf.so.20.0 00:20:53.200 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:20:53.200 LIB libspdk_iscsi.a 00:20:53.200 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:20:53.200 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:20:53.200 CC lib/ftl/base/ftl_base_dev.o 00:20:53.200 CC lib/ftl/base/ftl_base_bdev.o 00:20:53.200 SO libspdk_iscsi.so.8.0 00:20:53.200 CC lib/ftl/ftl_trace.o 00:20:53.200 SYMLINK 
libspdk_nvmf.so 00:20:53.458 SYMLINK libspdk_iscsi.so 00:20:53.715 LIB libspdk_ftl.a 00:20:53.972 SO libspdk_ftl.so.9.0 00:20:54.230 SYMLINK libspdk_ftl.so 00:20:54.487 CC module/env_dpdk/env_dpdk_rpc.o 00:20:54.487 CC module/keyring/file/keyring.o 00:20:54.745 CC module/keyring/linux/keyring.o 00:20:54.745 CC module/sock/posix/posix.o 00:20:54.745 CC module/blob/bdev/blob_bdev.o 00:20:54.745 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:20:54.745 CC module/accel/error/accel_error.o 00:20:54.745 CC module/fsdev/aio/fsdev_aio.o 00:20:54.745 CC module/scheduler/dynamic/scheduler_dynamic.o 00:20:54.745 CC module/scheduler/gscheduler/gscheduler.o 00:20:54.745 LIB libspdk_env_dpdk_rpc.a 00:20:54.745 SO libspdk_env_dpdk_rpc.so.6.0 00:20:54.745 SYMLINK libspdk_env_dpdk_rpc.so 00:20:54.745 CC module/keyring/linux/keyring_rpc.o 00:20:54.745 CC module/keyring/file/keyring_rpc.o 00:20:54.745 LIB libspdk_scheduler_gscheduler.a 00:20:54.745 LIB libspdk_scheduler_dpdk_governor.a 00:20:54.745 SO libspdk_scheduler_gscheduler.so.4.0 00:20:54.745 SO libspdk_scheduler_dpdk_governor.so.4.0 00:20:55.003 LIB libspdk_scheduler_dynamic.a 00:20:55.003 CC module/accel/error/accel_error_rpc.o 00:20:55.003 SO libspdk_scheduler_dynamic.so.4.0 00:20:55.004 LIB libspdk_keyring_linux.a 00:20:55.004 LIB libspdk_keyring_file.a 00:20:55.004 SYMLINK libspdk_scheduler_gscheduler.so 00:20:55.004 SO libspdk_keyring_linux.so.1.0 00:20:55.004 SYMLINK libspdk_scheduler_dpdk_governor.so 00:20:55.004 CC module/fsdev/aio/fsdev_aio_rpc.o 00:20:55.004 CC module/accel/ioat/accel_ioat.o 00:20:55.004 CC module/accel/ioat/accel_ioat_rpc.o 00:20:55.004 SO libspdk_keyring_file.so.2.0 00:20:55.004 SYMLINK libspdk_scheduler_dynamic.so 00:20:55.004 LIB libspdk_blob_bdev.a 00:20:55.004 SYMLINK libspdk_keyring_linux.so 00:20:55.004 SO libspdk_blob_bdev.so.12.0 00:20:55.004 SYMLINK libspdk_keyring_file.so 00:20:55.004 LIB libspdk_accel_error.a 00:20:55.004 CC module/fsdev/aio/linux_aio_mgr.o 00:20:55.004 SYMLINK libspdk_blob_bdev.so 00:20:55.004 SO libspdk_accel_error.so.2.0 00:20:55.262 SYMLINK libspdk_accel_error.so 00:20:55.262 LIB libspdk_accel_ioat.a 00:20:55.262 CC module/accel/iaa/accel_iaa.o 00:20:55.262 CC module/accel/dsa/accel_dsa.o 00:20:55.262 SO libspdk_accel_ioat.so.6.0 00:20:55.262 CC module/accel/iaa/accel_iaa_rpc.o 00:20:55.262 SYMLINK libspdk_accel_ioat.so 00:20:55.262 CC module/accel/dsa/accel_dsa_rpc.o 00:20:55.520 CC module/bdev/error/vbdev_error.o 00:20:55.520 CC module/bdev/delay/vbdev_delay.o 00:20:55.520 CC module/bdev/gpt/gpt.o 00:20:55.520 CC module/blobfs/bdev/blobfs_bdev.o 00:20:55.520 CC module/bdev/delay/vbdev_delay_rpc.o 00:20:55.520 CC module/bdev/error/vbdev_error_rpc.o 00:20:55.520 LIB libspdk_accel_iaa.a 00:20:55.520 LIB libspdk_accel_dsa.a 00:20:55.520 LIB libspdk_fsdev_aio.a 00:20:55.520 LIB libspdk_sock_posix.a 00:20:55.520 SO libspdk_accel_iaa.so.3.0 00:20:55.520 SO libspdk_accel_dsa.so.5.0 00:20:55.520 SO libspdk_sock_posix.so.6.0 00:20:55.520 SO libspdk_fsdev_aio.so.1.0 00:20:55.778 CC module/bdev/gpt/vbdev_gpt.o 00:20:55.778 SYMLINK libspdk_accel_iaa.so 00:20:55.778 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:20:55.778 SYMLINK libspdk_accel_dsa.so 00:20:55.778 SYMLINK libspdk_fsdev_aio.so 00:20:55.778 LIB libspdk_bdev_error.a 00:20:55.778 SYMLINK libspdk_sock_posix.so 00:20:55.778 SO libspdk_bdev_error.so.6.0 00:20:55.778 SYMLINK libspdk_bdev_error.so 00:20:55.778 LIB libspdk_bdev_delay.a 00:20:55.778 CC module/bdev/malloc/bdev_malloc.o 00:20:55.778 CC module/bdev/null/bdev_null.o 00:20:55.778 LIB 
libspdk_blobfs_bdev.a 00:20:55.778 CC module/bdev/lvol/vbdev_lvol.o 00:20:56.036 CC module/bdev/passthru/vbdev_passthru.o 00:20:56.036 CC module/bdev/nvme/bdev_nvme.o 00:20:56.036 CC module/bdev/raid/bdev_raid.o 00:20:56.036 SO libspdk_bdev_delay.so.6.0 00:20:56.036 SO libspdk_blobfs_bdev.so.6.0 00:20:56.036 SYMLINK libspdk_bdev_delay.so 00:20:56.036 SYMLINK libspdk_blobfs_bdev.so 00:20:56.036 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:20:56.036 LIB libspdk_bdev_gpt.a 00:20:56.036 CC module/bdev/split/vbdev_split.o 00:20:56.036 SO libspdk_bdev_gpt.so.6.0 00:20:56.036 SYMLINK libspdk_bdev_gpt.so 00:20:56.295 CC module/bdev/zone_block/vbdev_zone_block.o 00:20:56.295 CC module/bdev/null/bdev_null_rpc.o 00:20:56.295 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:20:56.295 CC module/bdev/xnvme/bdev_xnvme.o 00:20:56.295 CC module/bdev/split/vbdev_split_rpc.o 00:20:56.295 CC module/bdev/malloc/bdev_malloc_rpc.o 00:20:56.554 LIB libspdk_bdev_null.a 00:20:56.554 LIB libspdk_bdev_passthru.a 00:20:56.554 SO libspdk_bdev_null.so.6.0 00:20:56.554 LIB libspdk_bdev_split.a 00:20:56.554 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:20:56.554 SO libspdk_bdev_passthru.so.6.0 00:20:56.554 SO libspdk_bdev_split.so.6.0 00:20:56.554 SYMLINK libspdk_bdev_null.so 00:20:56.554 LIB libspdk_bdev_lvol.a 00:20:56.554 CC module/bdev/raid/bdev_raid_rpc.o 00:20:56.554 LIB libspdk_bdev_malloc.a 00:20:56.554 SYMLINK libspdk_bdev_split.so 00:20:56.554 SO libspdk_bdev_malloc.so.6.0 00:20:56.554 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:20:56.554 SO libspdk_bdev_lvol.so.6.0 00:20:56.554 SYMLINK libspdk_bdev_passthru.so 00:20:56.554 CC module/bdev/raid/bdev_raid_sb.o 00:20:56.813 SYMLINK libspdk_bdev_malloc.so 00:20:56.813 LIB libspdk_bdev_xnvme.a 00:20:56.813 SYMLINK libspdk_bdev_lvol.so 00:20:56.813 CC module/bdev/nvme/bdev_nvme_rpc.o 00:20:56.813 SO libspdk_bdev_xnvme.so.3.0 00:20:56.813 CC module/bdev/aio/bdev_aio.o 00:20:56.813 CC module/bdev/ftl/bdev_ftl.o 00:20:56.813 LIB libspdk_bdev_zone_block.a 00:20:56.813 SYMLINK libspdk_bdev_xnvme.so 00:20:56.813 CC module/bdev/nvme/nvme_rpc.o 00:20:56.813 SO libspdk_bdev_zone_block.so.6.0 00:20:56.813 CC module/bdev/iscsi/bdev_iscsi.o 00:20:57.071 SYMLINK libspdk_bdev_zone_block.so 00:20:57.071 CC module/bdev/nvme/bdev_mdns_client.o 00:20:57.071 CC module/bdev/nvme/vbdev_opal.o 00:20:57.071 CC module/bdev/virtio/bdev_virtio_scsi.o 00:20:57.071 CC module/bdev/nvme/vbdev_opal_rpc.o 00:20:57.071 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:20:57.071 CC module/bdev/ftl/bdev_ftl_rpc.o 00:20:57.329 CC module/bdev/raid/raid0.o 00:20:57.329 CC module/bdev/aio/bdev_aio_rpc.o 00:20:57.329 CC module/bdev/virtio/bdev_virtio_blk.o 00:20:57.329 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:20:57.329 CC module/bdev/raid/raid1.o 00:20:57.329 CC module/bdev/raid/concat.o 00:20:57.329 LIB libspdk_bdev_ftl.a 00:20:57.588 SO libspdk_bdev_ftl.so.6.0 00:20:57.588 LIB libspdk_bdev_aio.a 00:20:57.588 SO libspdk_bdev_aio.so.6.0 00:20:57.588 LIB libspdk_bdev_iscsi.a 00:20:57.588 SYMLINK libspdk_bdev_ftl.so 00:20:57.588 CC module/bdev/virtio/bdev_virtio_rpc.o 00:20:57.588 SO libspdk_bdev_iscsi.so.6.0 00:20:57.588 SYMLINK libspdk_bdev_aio.so 00:20:57.588 SYMLINK libspdk_bdev_iscsi.so 00:20:57.845 LIB libspdk_bdev_raid.a 00:20:57.845 SO libspdk_bdev_raid.so.6.0 00:20:57.845 LIB libspdk_bdev_virtio.a 00:20:57.845 SO libspdk_bdev_virtio.so.6.0 00:20:57.845 SYMLINK libspdk_bdev_raid.so 00:20:58.103 SYMLINK libspdk_bdev_virtio.so 00:20:59.479 LIB libspdk_bdev_nvme.a 00:20:59.738 SO libspdk_bdev_nvme.so.7.1 
00:20:59.738 SYMLINK libspdk_bdev_nvme.so 00:21:00.304 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:21:00.304 CC module/event/subsystems/vmd/vmd.o 00:21:00.304 CC module/event/subsystems/vmd/vmd_rpc.o 00:21:00.304 CC module/event/subsystems/fsdev/fsdev.o 00:21:00.304 CC module/event/subsystems/sock/sock.o 00:21:00.304 CC module/event/subsystems/keyring/keyring.o 00:21:00.304 CC module/event/subsystems/scheduler/scheduler.o 00:21:00.304 CC module/event/subsystems/iobuf/iobuf.o 00:21:00.304 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:21:00.304 LIB libspdk_event_scheduler.a 00:21:00.304 LIB libspdk_event_fsdev.a 00:21:00.562 LIB libspdk_event_keyring.a 00:21:00.562 LIB libspdk_event_vhost_blk.a 00:21:00.562 SO libspdk_event_scheduler.so.4.0 00:21:00.562 LIB libspdk_event_vmd.a 00:21:00.562 SO libspdk_event_fsdev.so.1.0 00:21:00.562 LIB libspdk_event_sock.a 00:21:00.562 SO libspdk_event_keyring.so.1.0 00:21:00.562 SO libspdk_event_vhost_blk.so.3.0 00:21:00.562 SO libspdk_event_vmd.so.6.0 00:21:00.562 SO libspdk_event_sock.so.5.0 00:21:00.563 SYMLINK libspdk_event_scheduler.so 00:21:00.563 SYMLINK libspdk_event_fsdev.so 00:21:00.563 SYMLINK libspdk_event_keyring.so 00:21:00.563 LIB libspdk_event_iobuf.a 00:21:00.563 SYMLINK libspdk_event_vhost_blk.so 00:21:00.563 SYMLINK libspdk_event_vmd.so 00:21:00.563 SYMLINK libspdk_event_sock.so 00:21:00.563 SO libspdk_event_iobuf.so.3.0 00:21:00.563 SYMLINK libspdk_event_iobuf.so 00:21:00.821 CC module/event/subsystems/accel/accel.o 00:21:01.081 LIB libspdk_event_accel.a 00:21:01.081 SO libspdk_event_accel.so.6.0 00:21:01.081 SYMLINK libspdk_event_accel.so 00:21:01.340 CC module/event/subsystems/bdev/bdev.o 00:21:01.599 LIB libspdk_event_bdev.a 00:21:01.599 SO libspdk_event_bdev.so.6.0 00:21:01.858 SYMLINK libspdk_event_bdev.so 00:21:01.858 CC module/event/subsystems/nbd/nbd.o 00:21:01.858 CC module/event/subsystems/scsi/scsi.o 00:21:01.858 CC module/event/subsystems/ublk/ublk.o 00:21:01.858 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:21:01.858 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:21:02.116 LIB libspdk_event_nbd.a 00:21:02.116 LIB libspdk_event_ublk.a 00:21:02.116 SO libspdk_event_nbd.so.6.0 00:21:02.116 SO libspdk_event_ublk.so.3.0 00:21:02.116 LIB libspdk_event_scsi.a 00:21:02.116 SO libspdk_event_scsi.so.6.0 00:21:02.116 SYMLINK libspdk_event_nbd.so 00:21:02.116 SYMLINK libspdk_event_ublk.so 00:21:02.374 SYMLINK libspdk_event_scsi.so 00:21:02.374 LIB libspdk_event_nvmf.a 00:21:02.374 SO libspdk_event_nvmf.so.6.0 00:21:02.374 SYMLINK libspdk_event_nvmf.so 00:21:02.374 CC module/event/subsystems/iscsi/iscsi.o 00:21:02.374 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:21:02.631 LIB libspdk_event_vhost_scsi.a 00:21:02.631 LIB libspdk_event_iscsi.a 00:21:02.631 SO libspdk_event_vhost_scsi.so.3.0 00:21:02.631 SO libspdk_event_iscsi.so.6.0 00:21:02.888 SYMLINK libspdk_event_iscsi.so 00:21:02.888 SYMLINK libspdk_event_vhost_scsi.so 00:21:02.888 SO libspdk.so.6.0 00:21:02.888 SYMLINK libspdk.so 00:21:03.145 CXX app/trace/trace.o 00:21:03.145 CC app/trace_record/trace_record.o 00:21:03.145 CC examples/interrupt_tgt/interrupt_tgt.o 00:21:03.145 CC app/iscsi_tgt/iscsi_tgt.o 00:21:03.145 CC app/nvmf_tgt/nvmf_main.o 00:21:03.145 CC examples/ioat/perf/perf.o 00:21:03.403 CC app/spdk_tgt/spdk_tgt.o 00:21:03.403 CC examples/util/zipf/zipf.o 00:21:03.403 CC test/thread/poller_perf/poller_perf.o 00:21:03.403 CC test/dma/test_dma/test_dma.o 00:21:03.403 LINK nvmf_tgt 00:21:03.403 LINK interrupt_tgt 00:21:03.403 LINK zipf 00:21:03.403 LINK 
iscsi_tgt 00:21:03.403 LINK poller_perf 00:21:03.403 LINK spdk_tgt 00:21:03.662 LINK ioat_perf 00:21:03.662 LINK spdk_trace_record 00:21:03.662 LINK spdk_trace 00:21:03.919 CC app/spdk_lspci/spdk_lspci.o 00:21:03.920 TEST_HEADER include/spdk/accel.h 00:21:03.920 TEST_HEADER include/spdk/accel_module.h 00:21:03.920 TEST_HEADER include/spdk/assert.h 00:21:03.920 TEST_HEADER include/spdk/barrier.h 00:21:03.920 TEST_HEADER include/spdk/base64.h 00:21:03.920 CC examples/ioat/verify/verify.o 00:21:03.920 TEST_HEADER include/spdk/bdev.h 00:21:03.920 TEST_HEADER include/spdk/bdev_module.h 00:21:03.920 TEST_HEADER include/spdk/bdev_zone.h 00:21:03.920 TEST_HEADER include/spdk/bit_array.h 00:21:03.920 TEST_HEADER include/spdk/bit_pool.h 00:21:03.920 TEST_HEADER include/spdk/blob_bdev.h 00:21:03.920 TEST_HEADER include/spdk/blobfs_bdev.h 00:21:03.920 TEST_HEADER include/spdk/blobfs.h 00:21:03.920 TEST_HEADER include/spdk/blob.h 00:21:03.920 TEST_HEADER include/spdk/conf.h 00:21:03.920 TEST_HEADER include/spdk/config.h 00:21:03.920 TEST_HEADER include/spdk/cpuset.h 00:21:03.920 TEST_HEADER include/spdk/crc16.h 00:21:03.920 TEST_HEADER include/spdk/crc32.h 00:21:03.920 TEST_HEADER include/spdk/crc64.h 00:21:03.920 TEST_HEADER include/spdk/dif.h 00:21:03.920 CC test/app/histogram_perf/histogram_perf.o 00:21:03.920 TEST_HEADER include/spdk/dma.h 00:21:03.920 TEST_HEADER include/spdk/endian.h 00:21:03.920 TEST_HEADER include/spdk/env_dpdk.h 00:21:03.920 TEST_HEADER include/spdk/env.h 00:21:03.920 TEST_HEADER include/spdk/event.h 00:21:03.920 TEST_HEADER include/spdk/fd_group.h 00:21:03.920 TEST_HEADER include/spdk/fd.h 00:21:03.920 CC test/app/bdev_svc/bdev_svc.o 00:21:03.920 TEST_HEADER include/spdk/file.h 00:21:03.920 TEST_HEADER include/spdk/fsdev.h 00:21:03.920 CC app/spdk_nvme_perf/perf.o 00:21:03.920 TEST_HEADER include/spdk/fsdev_module.h 00:21:03.920 TEST_HEADER include/spdk/ftl.h 00:21:03.920 TEST_HEADER include/spdk/fuse_dispatcher.h 00:21:03.920 TEST_HEADER include/spdk/gpt_spec.h 00:21:03.920 TEST_HEADER include/spdk/hexlify.h 00:21:03.920 TEST_HEADER include/spdk/histogram_data.h 00:21:03.920 TEST_HEADER include/spdk/idxd.h 00:21:03.920 TEST_HEADER include/spdk/idxd_spec.h 00:21:03.920 CC examples/thread/thread/thread_ex.o 00:21:03.920 TEST_HEADER include/spdk/init.h 00:21:03.920 TEST_HEADER include/spdk/ioat.h 00:21:03.920 TEST_HEADER include/spdk/ioat_spec.h 00:21:03.920 TEST_HEADER include/spdk/iscsi_spec.h 00:21:03.920 TEST_HEADER include/spdk/json.h 00:21:03.920 TEST_HEADER include/spdk/jsonrpc.h 00:21:03.920 TEST_HEADER include/spdk/keyring.h 00:21:03.920 TEST_HEADER include/spdk/keyring_module.h 00:21:03.920 TEST_HEADER include/spdk/likely.h 00:21:03.920 TEST_HEADER include/spdk/log.h 00:21:03.920 TEST_HEADER include/spdk/lvol.h 00:21:03.920 TEST_HEADER include/spdk/md5.h 00:21:03.920 TEST_HEADER include/spdk/memory.h 00:21:03.920 TEST_HEADER include/spdk/mmio.h 00:21:03.920 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:21:03.920 TEST_HEADER include/spdk/nbd.h 00:21:03.920 TEST_HEADER include/spdk/net.h 00:21:03.920 TEST_HEADER include/spdk/notify.h 00:21:03.920 TEST_HEADER include/spdk/nvme.h 00:21:03.920 TEST_HEADER include/spdk/nvme_intel.h 00:21:03.920 LINK spdk_lspci 00:21:03.920 TEST_HEADER include/spdk/nvme_ocssd.h 00:21:03.920 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:21:03.920 TEST_HEADER include/spdk/nvme_spec.h 00:21:03.920 TEST_HEADER include/spdk/nvme_zns.h 00:21:03.920 TEST_HEADER include/spdk/nvmf_cmd.h 00:21:03.920 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:21:03.920 
TEST_HEADER include/spdk/nvmf.h 00:21:03.920 TEST_HEADER include/spdk/nvmf_spec.h 00:21:03.920 TEST_HEADER include/spdk/nvmf_transport.h 00:21:03.920 TEST_HEADER include/spdk/opal.h 00:21:03.920 TEST_HEADER include/spdk/opal_spec.h 00:21:03.920 TEST_HEADER include/spdk/pci_ids.h 00:21:03.920 TEST_HEADER include/spdk/pipe.h 00:21:03.920 TEST_HEADER include/spdk/queue.h 00:21:03.920 LINK test_dma 00:21:03.920 TEST_HEADER include/spdk/reduce.h 00:21:03.920 TEST_HEADER include/spdk/rpc.h 00:21:03.920 TEST_HEADER include/spdk/scheduler.h 00:21:03.920 TEST_HEADER include/spdk/scsi.h 00:21:03.920 TEST_HEADER include/spdk/scsi_spec.h 00:21:03.920 TEST_HEADER include/spdk/sock.h 00:21:03.920 TEST_HEADER include/spdk/stdinc.h 00:21:03.920 TEST_HEADER include/spdk/string.h 00:21:03.920 TEST_HEADER include/spdk/thread.h 00:21:03.920 TEST_HEADER include/spdk/trace.h 00:21:03.920 TEST_HEADER include/spdk/trace_parser.h 00:21:03.920 TEST_HEADER include/spdk/tree.h 00:21:03.920 TEST_HEADER include/spdk/ublk.h 00:21:03.920 TEST_HEADER include/spdk/util.h 00:21:03.920 TEST_HEADER include/spdk/uuid.h 00:21:03.920 TEST_HEADER include/spdk/version.h 00:21:03.920 TEST_HEADER include/spdk/vfio_user_pci.h 00:21:03.920 TEST_HEADER include/spdk/vfio_user_spec.h 00:21:03.920 LINK histogram_perf 00:21:03.920 TEST_HEADER include/spdk/vhost.h 00:21:03.920 TEST_HEADER include/spdk/vmd.h 00:21:04.178 TEST_HEADER include/spdk/xor.h 00:21:04.178 TEST_HEADER include/spdk/zipf.h 00:21:04.178 CXX test/cpp_headers/accel.o 00:21:04.178 LINK bdev_svc 00:21:04.178 LINK verify 00:21:04.178 CC test/env/mem_callbacks/mem_callbacks.o 00:21:04.178 CC test/rpc_client/rpc_client_test.o 00:21:04.178 CXX test/cpp_headers/accel_module.o 00:21:04.178 LINK thread 00:21:04.436 CC test/event/event_perf/event_perf.o 00:21:04.436 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:21:04.436 CC test/event/reactor/reactor.o 00:21:04.436 CC test/event/reactor_perf/reactor_perf.o 00:21:04.436 LINK nvme_fuzz 00:21:04.436 CXX test/cpp_headers/assert.o 00:21:04.436 LINK event_perf 00:21:04.436 LINK rpc_client_test 00:21:04.436 LINK reactor 00:21:04.694 LINK reactor_perf 00:21:04.694 CXX test/cpp_headers/barrier.o 00:21:04.694 CXX test/cpp_headers/base64.o 00:21:04.694 CXX test/cpp_headers/bdev.o 00:21:04.694 CC examples/sock/hello_world/hello_sock.o 00:21:04.694 CC test/event/app_repeat/app_repeat.o 00:21:04.952 CC test/accel/dif/dif.o 00:21:04.952 LINK mem_callbacks 00:21:04.952 CXX test/cpp_headers/bdev_module.o 00:21:04.952 CC test/event/scheduler/scheduler.o 00:21:04.952 LINK spdk_nvme_perf 00:21:04.952 LINK app_repeat 00:21:04.952 CC examples/vmd/lsvmd/lsvmd.o 00:21:04.952 CC examples/idxd/perf/perf.o 00:21:04.952 LINK hello_sock 00:21:05.209 CC test/env/vtophys/vtophys.o 00:21:05.209 CXX test/cpp_headers/bdev_zone.o 00:21:05.210 LINK scheduler 00:21:05.210 LINK lsvmd 00:21:05.210 CXX test/cpp_headers/bit_array.o 00:21:05.210 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:21:05.210 CC app/spdk_nvme_identify/identify.o 00:21:05.210 LINK vtophys 00:21:05.467 CXX test/cpp_headers/bit_pool.o 00:21:05.467 LINK env_dpdk_post_init 00:21:05.467 CC examples/vmd/led/led.o 00:21:05.467 LINK idxd_perf 00:21:05.725 CC examples/fsdev/hello_world/hello_fsdev.o 00:21:05.725 CC examples/accel/perf/accel_perf.o 00:21:05.725 CXX test/cpp_headers/blob_bdev.o 00:21:05.725 LINK led 00:21:05.725 CC examples/blob/hello_world/hello_blob.o 00:21:05.725 CC test/env/memory/memory_ut.o 00:21:05.725 LINK dif 00:21:05.725 CXX test/cpp_headers/blobfs_bdev.o 00:21:05.983 CC 
examples/nvme/hello_world/hello_world.o 00:21:05.983 CC examples/nvme/reconnect/reconnect.o 00:21:05.983 LINK hello_fsdev 00:21:05.983 LINK hello_blob 00:21:05.983 CC examples/nvme/nvme_manage/nvme_manage.o 00:21:05.983 CXX test/cpp_headers/blobfs.o 00:21:06.240 LINK hello_world 00:21:06.240 LINK accel_perf 00:21:06.240 CC examples/nvme/arbitration/arbitration.o 00:21:06.240 CXX test/cpp_headers/blob.o 00:21:06.240 CC examples/blob/cli/blobcli.o 00:21:06.240 LINK spdk_nvme_identify 00:21:06.498 LINK reconnect 00:21:06.498 CC examples/nvme/hotplug/hotplug.o 00:21:06.498 CXX test/cpp_headers/conf.o 00:21:06.498 CC examples/nvme/cmb_copy/cmb_copy.o 00:21:06.498 CXX test/cpp_headers/config.o 00:21:06.756 CXX test/cpp_headers/cpuset.o 00:21:06.756 CC app/spdk_nvme_discover/discovery_aer.o 00:21:06.756 CC examples/nvme/abort/abort.o 00:21:06.756 LINK arbitration 00:21:06.756 LINK iscsi_fuzz 00:21:06.756 LINK nvme_manage 00:21:06.756 LINK hotplug 00:21:06.756 LINK cmb_copy 00:21:06.756 CXX test/cpp_headers/crc16.o 00:21:06.756 LINK spdk_nvme_discover 00:21:07.013 CXX test/cpp_headers/crc32.o 00:21:07.013 LINK blobcli 00:21:07.014 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:21:07.014 CC test/env/pci/pci_ut.o 00:21:07.014 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:21:07.014 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:21:07.014 CC app/spdk_top/spdk_top.o 00:21:07.014 CXX test/cpp_headers/crc64.o 00:21:07.271 CC examples/bdev/hello_world/hello_bdev.o 00:21:07.271 LINK abort 00:21:07.272 LINK pmr_persistence 00:21:07.272 LINK memory_ut 00:21:07.272 CXX test/cpp_headers/dif.o 00:21:07.272 CC test/app/jsoncat/jsoncat.o 00:21:07.272 CC test/blobfs/mkfs/mkfs.o 00:21:07.530 LINK hello_bdev 00:21:07.530 CC test/app/stub/stub.o 00:21:07.530 LINK jsoncat 00:21:07.530 CXX test/cpp_headers/dma.o 00:21:07.530 CC examples/bdev/bdevperf/bdevperf.o 00:21:07.530 CC app/vhost/vhost.o 00:21:07.530 LINK pci_ut 00:21:07.530 LINK vhost_fuzz 00:21:07.787 LINK mkfs 00:21:07.787 LINK stub 00:21:07.787 CXX test/cpp_headers/endian.o 00:21:07.787 LINK vhost 00:21:07.787 CXX test/cpp_headers/env_dpdk.o 00:21:07.787 CC app/spdk_dd/spdk_dd.o 00:21:08.044 CXX test/cpp_headers/env.o 00:21:08.044 CXX test/cpp_headers/event.o 00:21:08.044 CXX test/cpp_headers/fd_group.o 00:21:08.044 CC test/lvol/esnap/esnap.o 00:21:08.044 CC test/nvme/aer/aer.o 00:21:08.044 CC app/fio/nvme/fio_plugin.o 00:21:08.044 CXX test/cpp_headers/fd.o 00:21:08.044 CXX test/cpp_headers/file.o 00:21:08.044 CC test/bdev/bdevio/bdevio.o 00:21:08.301 CC app/fio/bdev/fio_plugin.o 00:21:08.301 LINK spdk_dd 00:21:08.301 LINK spdk_top 00:21:08.301 CXX test/cpp_headers/fsdev.o 00:21:08.301 CXX test/cpp_headers/fsdev_module.o 00:21:08.559 LINK aer 00:21:08.559 CXX test/cpp_headers/ftl.o 00:21:08.559 CXX test/cpp_headers/fuse_dispatcher.o 00:21:08.559 CC test/nvme/reset/reset.o 00:21:08.559 CXX test/cpp_headers/gpt_spec.o 00:21:08.559 LINK bdevperf 00:21:08.559 LINK bdevio 00:21:08.818 CC test/nvme/sgl/sgl.o 00:21:08.818 CXX test/cpp_headers/hexlify.o 00:21:08.818 CXX test/cpp_headers/histogram_data.o 00:21:08.818 CC test/nvme/e2edp/nvme_dp.o 00:21:08.818 LINK spdk_nvme 00:21:08.818 LINK reset 00:21:08.818 LINK spdk_bdev 00:21:08.818 CXX test/cpp_headers/idxd.o 00:21:08.818 CXX test/cpp_headers/idxd_spec.o 00:21:09.076 CC test/nvme/overhead/overhead.o 00:21:09.076 CXX test/cpp_headers/init.o 00:21:09.076 CC examples/nvmf/nvmf/nvmf.o 00:21:09.076 CXX test/cpp_headers/ioat.o 00:21:09.076 LINK sgl 00:21:09.076 CXX test/cpp_headers/ioat_spec.o 00:21:09.076 CC 
test/nvme/err_injection/err_injection.o 00:21:09.076 CC test/nvme/startup/startup.o 00:21:09.076 LINK nvme_dp 00:21:09.334 CXX test/cpp_headers/iscsi_spec.o 00:21:09.334 CC test/nvme/reserve/reserve.o 00:21:09.334 CC test/nvme/simple_copy/simple_copy.o 00:21:09.334 LINK overhead 00:21:09.334 LINK nvmf 00:21:09.334 CXX test/cpp_headers/json.o 00:21:09.334 LINK startup 00:21:09.334 LINK err_injection 00:21:09.334 CC test/nvme/connect_stress/connect_stress.o 00:21:09.592 CC test/nvme/boot_partition/boot_partition.o 00:21:09.592 LINK reserve 00:21:09.592 LINK simple_copy 00:21:09.592 CXX test/cpp_headers/jsonrpc.o 00:21:09.592 CXX test/cpp_headers/keyring.o 00:21:09.592 CC test/nvme/compliance/nvme_compliance.o 00:21:09.592 LINK boot_partition 00:21:09.592 LINK connect_stress 00:21:09.592 CC test/nvme/doorbell_aers/doorbell_aers.o 00:21:09.592 CC test/nvme/fused_ordering/fused_ordering.o 00:21:09.851 CXX test/cpp_headers/keyring_module.o 00:21:09.851 CC test/nvme/fdp/fdp.o 00:21:09.851 CXX test/cpp_headers/likely.o 00:21:09.851 CXX test/cpp_headers/log.o 00:21:09.851 CXX test/cpp_headers/lvol.o 00:21:09.851 CC test/nvme/cuse/cuse.o 00:21:09.851 LINK doorbell_aers 00:21:09.851 LINK fused_ordering 00:21:09.851 CXX test/cpp_headers/md5.o 00:21:10.125 CXX test/cpp_headers/memory.o 00:21:10.125 CXX test/cpp_headers/mmio.o 00:21:10.125 CXX test/cpp_headers/nbd.o 00:21:10.125 LINK nvme_compliance 00:21:10.125 CXX test/cpp_headers/net.o 00:21:10.125 CXX test/cpp_headers/notify.o 00:21:10.125 CXX test/cpp_headers/nvme.o 00:21:10.125 CXX test/cpp_headers/nvme_intel.o 00:21:10.125 LINK fdp 00:21:10.125 CXX test/cpp_headers/nvme_ocssd.o 00:21:10.125 CXX test/cpp_headers/nvme_ocssd_spec.o 00:21:10.414 CXX test/cpp_headers/nvme_spec.o 00:21:10.414 CXX test/cpp_headers/nvme_zns.o 00:21:10.414 CXX test/cpp_headers/nvmf_cmd.o 00:21:10.414 CXX test/cpp_headers/nvmf_fc_spec.o 00:21:10.414 CXX test/cpp_headers/nvmf.o 00:21:10.414 CXX test/cpp_headers/nvmf_spec.o 00:21:10.414 CXX test/cpp_headers/nvmf_transport.o 00:21:10.414 CXX test/cpp_headers/opal.o 00:21:10.415 CXX test/cpp_headers/opal_spec.o 00:21:10.415 CXX test/cpp_headers/pci_ids.o 00:21:10.673 CXX test/cpp_headers/pipe.o 00:21:10.673 CXX test/cpp_headers/queue.o 00:21:10.673 CXX test/cpp_headers/reduce.o 00:21:10.673 CXX test/cpp_headers/rpc.o 00:21:10.673 CXX test/cpp_headers/scheduler.o 00:21:10.673 CXX test/cpp_headers/scsi.o 00:21:10.673 CXX test/cpp_headers/scsi_spec.o 00:21:10.673 CXX test/cpp_headers/sock.o 00:21:10.673 CXX test/cpp_headers/stdinc.o 00:21:10.673 CXX test/cpp_headers/string.o 00:21:10.673 CXX test/cpp_headers/thread.o 00:21:10.931 CXX test/cpp_headers/trace.o 00:21:10.931 CXX test/cpp_headers/trace_parser.o 00:21:10.931 CXX test/cpp_headers/tree.o 00:21:10.931 CXX test/cpp_headers/ublk.o 00:21:10.931 CXX test/cpp_headers/util.o 00:21:10.931 CXX test/cpp_headers/uuid.o 00:21:10.931 CXX test/cpp_headers/version.o 00:21:10.931 CXX test/cpp_headers/vfio_user_pci.o 00:21:10.931 CXX test/cpp_headers/vfio_user_spec.o 00:21:10.931 CXX test/cpp_headers/vhost.o 00:21:10.931 CXX test/cpp_headers/vmd.o 00:21:11.189 CXX test/cpp_headers/xor.o 00:21:11.189 CXX test/cpp_headers/zipf.o 00:21:11.447 LINK cuse 00:21:15.633 LINK esnap 00:21:16.200 00:21:16.200 real 1m42.383s 00:21:16.200 user 9m37.562s 00:21:16.200 sys 1m40.260s 00:21:16.200 06:47:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:21:16.200 06:47:48 make -- common/autotest_common.sh@10 -- $ set +x 00:21:16.200 ************************************ 00:21:16.200 END TEST 
make 00:21:16.200 ************************************ 00:21:16.200 06:47:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:21:16.200 06:47:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:21:16.200 06:47:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:21:16.200 06:47:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:16.200 06:47:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:21:16.200 06:47:48 -- pm/common@44 -- $ pid=5332 00:21:16.200 06:47:48 -- pm/common@50 -- $ kill -TERM 5332 00:21:16.200 06:47:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:16.200 06:47:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:21:16.200 06:47:48 -- pm/common@44 -- $ pid=5333 00:21:16.200 06:47:48 -- pm/common@50 -- $ kill -TERM 5333 00:21:16.200 06:47:48 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:21:16.200 06:47:48 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:21:16.200 06:47:48 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:16.200 06:47:48 -- common/autotest_common.sh@1711 -- # lcov --version 00:21:16.200 06:47:48 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:16.200 06:47:48 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:16.200 06:47:48 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.200 06:47:48 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.200 06:47:48 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.200 06:47:48 -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.200 06:47:48 -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.200 06:47:48 -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.200 06:47:48 -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.200 06:47:48 -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.200 06:47:48 -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.200 06:47:48 -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.200 06:47:48 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.200 06:47:48 -- scripts/common.sh@344 -- # case "$op" in 00:21:16.200 06:47:48 -- scripts/common.sh@345 -- # : 1 00:21:16.200 06:47:48 -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.200 06:47:48 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.200 06:47:48 -- scripts/common.sh@365 -- # decimal 1 00:21:16.200 06:47:48 -- scripts/common.sh@353 -- # local d=1 00:21:16.200 06:47:48 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.200 06:47:48 -- scripts/common.sh@355 -- # echo 1 00:21:16.200 06:47:48 -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.200 06:47:48 -- scripts/common.sh@366 -- # decimal 2 00:21:16.200 06:47:48 -- scripts/common.sh@353 -- # local d=2 00:21:16.200 06:47:48 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.200 06:47:48 -- scripts/common.sh@355 -- # echo 2 00:21:16.200 06:47:48 -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.200 06:47:48 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.200 06:47:48 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.200 06:47:48 -- scripts/common.sh@368 -- # return 0 00:21:16.200 06:47:48 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.200 06:47:48 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:16.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.200 --rc genhtml_branch_coverage=1 00:21:16.200 --rc genhtml_function_coverage=1 00:21:16.200 --rc genhtml_legend=1 00:21:16.200 --rc geninfo_all_blocks=1 00:21:16.200 --rc geninfo_unexecuted_blocks=1 00:21:16.200 00:21:16.200 ' 00:21:16.200 06:47:48 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:16.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.200 --rc genhtml_branch_coverage=1 00:21:16.200 --rc genhtml_function_coverage=1 00:21:16.200 --rc genhtml_legend=1 00:21:16.200 --rc geninfo_all_blocks=1 00:21:16.200 --rc geninfo_unexecuted_blocks=1 00:21:16.200 00:21:16.200 ' 00:21:16.200 06:47:48 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:16.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.200 --rc genhtml_branch_coverage=1 00:21:16.200 --rc genhtml_function_coverage=1 00:21:16.200 --rc genhtml_legend=1 00:21:16.200 --rc geninfo_all_blocks=1 00:21:16.200 --rc geninfo_unexecuted_blocks=1 00:21:16.200 00:21:16.200 ' 00:21:16.200 06:47:48 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:16.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.200 --rc genhtml_branch_coverage=1 00:21:16.200 --rc genhtml_function_coverage=1 00:21:16.200 --rc genhtml_legend=1 00:21:16.200 --rc geninfo_all_blocks=1 00:21:16.200 --rc geninfo_unexecuted_blocks=1 00:21:16.200 00:21:16.200 ' 00:21:16.200 06:47:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.200 06:47:48 -- nvmf/common.sh@7 -- # uname -s 00:21:16.200 06:47:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.200 06:47:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.200 06:47:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.200 06:47:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.200 06:47:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.200 06:47:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.200 06:47:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.200 06:47:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.200 06:47:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.200 06:47:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.200 06:47:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7a10858d-5a4c-4885-924e-f934236c3390 00:21:16.200 
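
A note for readers following the xtrace: the 'lt 1.15 2' / cmp_versions lines above are scripts/common.sh checking whether the installed lcov (1.15) predates version 2 before choosing the matching '--rc lcov_*_coverage=1' option names. A minimal sketch of that split-and-compare idiom, assuming bash; the names are illustrative rather than the exact SPDK source:

    lt() {   # usage: lt 1.15 2 -> status 0 when $1 sorts before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earliest differing field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
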
06:47:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=7a10858d-5a4c-4885-924e-f934236c3390 00:21:16.200 06:47:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.200 06:47:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.200 06:47:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:16.200 06:47:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.459 06:47:48 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.459 06:47:48 -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.459 06:47:48 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.459 06:47:48 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.459 06:47:48 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.460 06:47:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.460 06:47:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.460 06:47:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.460 06:47:48 -- paths/export.sh@5 -- # export PATH 00:21:16.460 06:47:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.460 06:47:48 -- nvmf/common.sh@51 -- # : 0 00:21:16.460 06:47:48 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.460 06:47:48 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.460 06:47:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.460 06:47:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.460 06:47:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.460 06:47:48 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.460 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.460 06:47:48 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.460 06:47:48 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.460 06:47:48 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.460 06:47:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:21:16.460 06:47:48 -- spdk/autotest.sh@32 -- # uname -s 00:21:16.460 06:47:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:21:16.460 06:47:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:21:16.460 06:47:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:21:16.460 06:47:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:21:16.460 06:47:48 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:21:16.460 06:47:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:21:16.460 06:47:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:21:16.460 06:47:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:21:16.460 06:47:48 -- spdk/autotest.sh@48 -- # udevadm_pid=54937 00:21:16.460 06:47:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:21:16.460 06:47:48 -- pm/common@17 -- # local monitor 00:21:16.460 06:47:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:21:16.460 06:47:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:21:16.460 06:47:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:21:16.460 06:47:48 -- pm/common@21 -- # date +%s 00:21:16.460 06:47:48 -- pm/common@25 -- # sleep 1 00:21:16.460 06:47:48 -- pm/common@21 -- # date +%s 00:21:16.460 06:47:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733467668 00:21:16.460 06:47:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733467668 00:21:16.460 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733467668_collect-cpu-load.pm.log 00:21:16.460 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733467668_collect-vmstat.pm.log 00:21:17.414 06:47:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:21:17.414 06:47:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:21:17.414 06:47:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.414 06:47:49 -- common/autotest_common.sh@10 -- # set +x 00:21:17.414 06:47:49 -- spdk/autotest.sh@59 -- # create_test_list 00:21:17.414 06:47:49 -- common/autotest_common.sh@752 -- # xtrace_disable 00:21:17.414 06:47:49 -- common/autotest_common.sh@10 -- # set +x 00:21:17.414 06:47:49 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:21:17.414 06:47:49 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:21:17.414 06:47:49 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:21:17.414 06:47:49 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:21:17.414 06:47:49 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:21:17.414 06:47:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:21:17.414 06:47:49 -- common/autotest_common.sh@1457 -- # uname 00:21:17.414 06:47:49 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:21:17.414 06:47:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:21:17.414 06:47:49 -- common/autotest_common.sh@1477 -- # uname 00:21:17.414 06:47:49 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:21:17.414 06:47:49 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:21:17.414 06:47:49 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:21:17.695 lcov: LCOV version 1.15 00:21:17.695 06:47:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:21:35.777 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:21:35.777 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:21:53.910 06:48:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:21:53.910 06:48:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.910 06:48:25 -- common/autotest_common.sh@10 -- # set +x 00:21:53.910 06:48:25 -- spdk/autotest.sh@78 -- # rm -f 00:21:53.910 06:48:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:53.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.910 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:53.910 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:53.910 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:21:53.910 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:21:53.910 06:48:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:21:53.910 06:48:26 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:21:53.910 06:48:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:21:53.910 06:48:26 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:21:53.910 06:48:26 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:21:54.168 06:48:26 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:21:54.168 06:48:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:54.168 06:48:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:21:54.168 06:48:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:54.168 06:48:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:21:54.168 06:48:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:54.168 06:48:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:54.168 06:48:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:21:54.168 06:48:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:54.168 06:48:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:21:54.168 06:48:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:54.168 06:48:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:54.168 06:48:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:21:54.168 06:48:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:54.168 06:48:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:21:54.168 06:48:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:21:54.168 06:48:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:54.168 06:48:26 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:21:54.168 06:48:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:21:54.168 06:48:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:54.168 06:48:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:21:54.168 06:48:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:21:54.168 06:48:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:54.168 06:48:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:54.169 06:48:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:21:54.169 06:48:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:54.169 06:48:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:21:54.169 06:48:26 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:21:54.169 06:48:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:21:54.169 06:48:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:54.169 06:48:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:21:54.169 06:48:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:21:54.169 06:48:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:21:54.169 06:48:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:21:54.169 06:48:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:21:54.169 06:48:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:21:54.169 No valid GPT data, bailing 00:21:54.169 06:48:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:54.169 06:48:26 -- scripts/common.sh@394 -- # pt= 00:21:54.169 06:48:26 -- scripts/common.sh@395 -- # return 1 00:21:54.169 06:48:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:21:54.169 1+0 records in 00:21:54.169 1+0 records out 00:21:54.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104763 s, 100 MB/s 00:21:54.169 06:48:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:21:54.169 06:48:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:21:54.169 06:48:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:21:54.169 06:48:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:21:54.169 06:48:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:21:54.169 No valid GPT data, bailing 00:21:54.169 06:48:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:54.169 06:48:26 -- scripts/common.sh@394 -- # pt= 00:21:54.169 06:48:26 -- scripts/common.sh@395 -- # return 1 00:21:54.169 06:48:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:21:54.169 1+0 records in 00:21:54.169 1+0 records out 00:21:54.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00342064 s, 307 MB/s 00:21:54.169 06:48:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:21:54.169 06:48:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:21:54.169 06:48:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:21:54.169 06:48:26 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:21:54.169 06:48:26 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:21:54.428 No valid GPT data, bailing 00:21:54.428 06:48:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:21:54.428 06:48:26 -- scripts/common.sh@394 -- # pt= 00:21:54.428 06:48:26 -- scripts/common.sh@395 -- # return 1 00:21:54.428 06:48:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:21:54.428 1+0 records in 00:21:54.428 1+0 records out 00:21:54.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402925 s, 260 MB/s 00:21:54.428 06:48:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:21:54.428 06:48:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:21:54.428 06:48:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:21:54.428 06:48:26 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:21:54.428 06:48:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:21:54.428 No valid GPT data, bailing 00:21:54.428 06:48:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:21:54.428 06:48:26 -- scripts/common.sh@394 -- # pt= 00:21:54.428 06:48:26 -- scripts/common.sh@395 -- # return 1 00:21:54.428 06:48:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:21:54.428 1+0 records in 00:21:54.428 1+0 records out 00:21:54.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377355 s, 278 MB/s 00:21:54.428 06:48:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:21:54.428 06:48:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:21:54.428 06:48:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:21:54.428 06:48:26 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:21:54.428 06:48:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:21:54.428 No valid GPT data, bailing 00:21:54.428 06:48:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:21:54.428 06:48:26 -- scripts/common.sh@394 -- # pt= 00:21:54.428 06:48:26 -- scripts/common.sh@395 -- # return 1 00:21:54.428 06:48:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:21:54.428 1+0 records in 00:21:54.428 1+0 records out 00:21:54.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421804 s, 249 MB/s 00:21:54.428 06:48:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:21:54.428 06:48:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:21:54.428 06:48:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:21:54.428 06:48:26 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:21:54.428 06:48:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:21:54.428 No valid GPT data, bailing 00:21:54.428 06:48:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:21:54.428 06:48:26 -- scripts/common.sh@394 -- # pt= 00:21:54.428 06:48:26 -- scripts/common.sh@395 -- # return 1 00:21:54.428 06:48:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:21:54.428 1+0 records in 00:21:54.428 1+0 records out 00:21:54.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416063 s, 252 MB/s 00:21:54.428 06:48:27 -- spdk/autotest.sh@105 -- # sync 00:21:54.686 06:48:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:21:54.686 06:48:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:21:54.686 06:48:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:21:56.594 
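
Between spdk/autotest.sh@97 and @105 above, every NVMe namespace is wiped: the preceding is_block_zoned checks read /sys/block/<dev>/queue/zoned and found every namespace "none" (not zoned), then each device with no recognizable partition table gets its first MiB zeroed. Condensed, the loop runs roughly as below; in the real script, block_in_use() also consults scripts/spdk-gpt.py, which is what prints "No valid GPT data, bailing":

    shopt -s extglob                      # required for the !(*p*) glob
    for dev in /dev/nvme*n!(*p*); do      # whole namespaces, not partitions
        # Leave devices with a recognizable partition table alone.
        if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
            continue
        fi
        # Zero the first MiB so stale metadata cannot leak into this run.
        dd if=/dev/zero of="$dev" bs=1M count=1
    done
    sync

The dd figures are self-consistent: 1048576 bytes in 0.0104763 s is roughly 100 MB/s, exactly as reported.
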
06:48:28 -- spdk/autotest.sh@111 -- # uname -s 00:21:56.594 06:48:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:21:56.594 06:48:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:21:56.594 06:48:28 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:21:56.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:57.420 Hugepages 00:21:57.420 node hugesize free / total 00:21:57.420 node0 1048576kB 0 / 0 00:21:57.420 node0 2048kB 0 / 0 00:21:57.420 00:21:57.420 Type BDF Vendor Device NUMA Driver Device Block devices 00:21:57.420 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:21:57.420 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:21:57.679 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:21:57.679 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:21:57.679 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:21:57.679 06:48:30 -- spdk/autotest.sh@117 -- # uname -s 00:21:57.679 06:48:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:21:57.679 06:48:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:21:57.679 06:48:30 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:58.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:58.870 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:58.870 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:58.870 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:58.870 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:58.870 06:48:31 -- common/autotest_common.sh@1517 -- # sleep 1 00:22:00.245 06:48:32 -- common/autotest_common.sh@1518 -- # bdfs=() 00:22:00.245 06:48:32 -- common/autotest_common.sh@1518 -- # local bdfs 00:22:00.245 06:48:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:22:00.245 06:48:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:22:00.245 06:48:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:22:00.245 06:48:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:22:00.245 06:48:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:00.245 06:48:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:00.245 06:48:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:22:00.245 06:48:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:22:00.245 06:48:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:00.245 06:48:32 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:00.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:00.503 Waiting for block devices as requested 00:22:00.503 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.761 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.761 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.761 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:22:06.017 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:22:06.017 06:48:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:22:06.017 06:48:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
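
The "nvme -> uio_pci_generic" and "uio_pci_generic -> nvme" lines above are scripts/setup.sh rebinding each controller between the kernel nvme driver and a userspace I/O stub, then back again for the namespace-revert step. The usual sysfs mechanics behind one such rebind, sketched for an example address taken from the log (setup.sh itself handles many more cases, so treat this as an illustration of the mechanism, not its source):

    bdf=0000:00:11.0                                            # example BDF from the log
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"     # detach the current driver
    echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the intended driver
    echo "$bdf" > /sys/bus/pci/drivers_probe                    # ask the kernel to bind it
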
00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:22:06.017 06:48:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:22:06.017 06:48:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:22:06.017 06:48:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:22:06.017 06:48:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:22:06.017 06:48:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:22:06.017 06:48:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:22:06.017 06:48:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:22:06.017 06:48:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:22:06.017 06:48:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:22:06.017 06:48:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:22:06.017 06:48:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:22:06.017 06:48:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:22:06.017 06:48:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1543 -- # continue 00:22:06.017 06:48:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:22:06.017 06:48:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:22:06.017 06:48:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:22:06.017 06:48:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:22:06.017 06:48:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:22:06.017 06:48:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:22:06.017 06:48:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:22:06.017 06:48:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:22:06.017 06:48:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:22:06.017 06:48:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:22:06.017 06:48:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:22:06.017 06:48:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:22:06.017 06:48:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:22:06.017 06:48:38 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:22:06.017 06:48:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1543 -- # continue 00:22:06.017 06:48:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:22:06.017 06:48:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:22:06.017 06:48:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:22:06.017 06:48:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:22:06.017 06:48:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:22:06.017 06:48:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:22:06.018 06:48:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:22:06.018 06:48:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:22:06.018 06:48:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:22:06.018 06:48:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:22:06.018 06:48:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:22:06.018 06:48:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:22:06.018 06:48:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:22:06.018 06:48:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:22:06.018 06:48:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:22:06.018 06:48:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:22:06.018 06:48:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:22:06.018 06:48:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:22:06.018 06:48:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:22:06.018 06:48:38 -- common/autotest_common.sh@1543 -- # continue 00:22:06.018 06:48:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:22:06.018 06:48:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:22:06.018 06:48:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:22:06.018 06:48:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:22:06.018 06:48:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:22:06.018 06:48:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:22:06.018 06:48:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:22:06.018 06:48:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:22:06.018 06:48:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:22:06.018 06:48:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:22:06.018 06:48:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:22:06.018 06:48:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:22:06.018 06:48:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:22:06.018 06:48:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:22:06.018 06:48:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:22:06.018 06:48:38 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:22:06.018 06:48:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:22:06.018 06:48:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:22:06.018 06:48:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:22:06.018 06:48:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:22:06.018 06:48:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:22:06.018 06:48:38 -- common/autotest_common.sh@1543 -- # continue 00:22:06.018 06:48:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:22:06.018 06:48:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.018 06:48:38 -- common/autotest_common.sh@10 -- # set +x 00:22:06.018 06:48:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:22:06.018 06:48:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.018 06:48:38 -- common/autotest_common.sh@10 -- # set +x 00:22:06.018 06:48:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:06.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:07.154 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:07.154 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:07.154 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:07.154 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:07.411 06:48:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:22:07.411 06:48:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.411 06:48:39 -- common/autotest_common.sh@10 -- # set +x 00:22:07.411 06:48:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:22:07.411 06:48:39 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:22:07.412 06:48:39 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:22:07.412 06:48:39 -- common/autotest_common.sh@1563 -- # bdfs=() 00:22:07.412 06:48:39 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:22:07.412 06:48:39 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:22:07.412 06:48:39 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:22:07.412 06:48:39 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:22:07.412 06:48:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:22:07.412 06:48:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:22:07.412 06:48:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:07.412 06:48:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:07.412 06:48:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:22:07.412 06:48:39 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:22:07.412 06:48:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:07.412 06:48:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:22:07.412 06:48:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:22:07.412 06:48:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:22:07.412 06:48:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:07.412 06:48:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:22:07.412 06:48:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:22:07.412 06:48:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:22:07.412 
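
The repeated id-ctrl parsing traced above reduces to three steps per controller: read OACS, test the Namespace Management bit, then check whether any NVM capacity is unallocated. Condensed into one loop (simplified relative to autotest_common.sh, which first resolves each controller from its PCI address):

    for ctrl in /dev/nvme{0..3}; do
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
        (( oacs & 0x8 )) || continue    # bit 3 = Namespace Management (0x12a & 0x8 = 8)
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # e.g. ' 0'
        (( unvmcap == 0 )) && continue  # nothing unallocated, nothing to revert
        # only a controller that reaches this point would have namespaces reverted
    done

All four controllers here report an unvmcap of 0, so every iteration ends at the continue, as the trace shows.
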
06:48:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:07.412 06:48:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:22:07.412 06:48:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:22:07.412 06:48:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:22:07.412 06:48:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:07.412 06:48:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:22:07.412 06:48:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:22:07.412 06:48:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:22:07.412 06:48:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:07.412 06:48:39 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:22:07.412 06:48:39 -- common/autotest_common.sh@1572 -- # return 0 00:22:07.412 06:48:39 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:22:07.412 06:48:39 -- common/autotest_common.sh@1580 -- # return 0 00:22:07.412 06:48:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:22:07.412 06:48:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:22:07.412 06:48:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:22:07.412 06:48:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:22:07.412 06:48:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:22:07.412 06:48:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.412 06:48:39 -- common/autotest_common.sh@10 -- # set +x 00:22:07.412 06:48:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:22:07.412 06:48:39 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:07.412 06:48:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:07.412 06:48:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.412 06:48:39 -- common/autotest_common.sh@10 -- # set +x 00:22:07.412 ************************************ 00:22:07.412 START TEST env 00:22:07.412 ************************************ 00:22:07.412 06:48:39 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:07.412 * Looking for test storage... 
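
opal_revert_cleanup, traced just above, only acts on controllers whose PCI device id matches 0x0a54; every qemu controller on this rig reports 0x0010, so the list stays empty and the function returns without doing anything. The filter is a per-device sysfs read, roughly:

    target=0x0a54                          # device id the OPAL cleanup looks for
    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # '0x0010' on every port here
        [[ $device == "$target" ]] && bdfs+=("$bdf")
    done
    echo "opal-capable controllers: ${bdfs[*]:-none}"
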
00:22:07.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:22:07.412 06:48:39 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.670 06:48:39 env -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.670 06:48:39 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.670 06:48:40 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.670 06:48:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.670 06:48:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.670 06:48:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.670 06:48:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.670 06:48:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.670 06:48:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.670 06:48:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.670 06:48:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.670 06:48:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.670 06:48:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.670 06:48:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.670 06:48:40 env -- scripts/common.sh@344 -- # case "$op" in 00:22:07.670 06:48:40 env -- scripts/common.sh@345 -- # : 1 00:22:07.670 06:48:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.670 06:48:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.670 06:48:40 env -- scripts/common.sh@365 -- # decimal 1 00:22:07.670 06:48:40 env -- scripts/common.sh@353 -- # local d=1 00:22:07.670 06:48:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.670 06:48:40 env -- scripts/common.sh@355 -- # echo 1 00:22:07.670 06:48:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.670 06:48:40 env -- scripts/common.sh@366 -- # decimal 2 00:22:07.670 06:48:40 env -- scripts/common.sh@353 -- # local d=2 00:22:07.670 06:48:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.670 06:48:40 env -- scripts/common.sh@355 -- # echo 2 00:22:07.670 06:48:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.670 06:48:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.670 06:48:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.670 06:48:40 env -- scripts/common.sh@368 -- # return 0 00:22:07.670 06:48:40 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.670 06:48:40 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.670 --rc genhtml_branch_coverage=1 00:22:07.670 --rc genhtml_function_coverage=1 00:22:07.670 --rc genhtml_legend=1 00:22:07.670 --rc geninfo_all_blocks=1 00:22:07.670 --rc geninfo_unexecuted_blocks=1 00:22:07.670 00:22:07.670 ' 00:22:07.670 06:48:40 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.670 --rc genhtml_branch_coverage=1 00:22:07.670 --rc genhtml_function_coverage=1 00:22:07.670 --rc genhtml_legend=1 00:22:07.670 --rc geninfo_all_blocks=1 00:22:07.670 --rc geninfo_unexecuted_blocks=1 00:22:07.670 00:22:07.670 ' 00:22:07.670 06:48:40 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.670 --rc genhtml_branch_coverage=1 00:22:07.670 --rc genhtml_function_coverage=1 00:22:07.671 --rc 
genhtml_legend=1 00:22:07.671 --rc geninfo_all_blocks=1 00:22:07.671 --rc geninfo_unexecuted_blocks=1 00:22:07.671 00:22:07.671 ' 00:22:07.671 06:48:40 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.671 --rc genhtml_branch_coverage=1 00:22:07.671 --rc genhtml_function_coverage=1 00:22:07.671 --rc genhtml_legend=1 00:22:07.671 --rc geninfo_all_blocks=1 00:22:07.671 --rc geninfo_unexecuted_blocks=1 00:22:07.671 00:22:07.671 ' 00:22:07.671 06:48:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:07.671 06:48:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:07.671 06:48:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.671 06:48:40 env -- common/autotest_common.sh@10 -- # set +x 00:22:07.671 ************************************ 00:22:07.671 START TEST env_memory 00:22:07.671 ************************************ 00:22:07.671 06:48:40 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:07.671 00:22:07.671 00:22:07.671 CUnit - A unit testing framework for C - Version 2.1-3 00:22:07.671 http://cunit.sourceforge.net/ 00:22:07.671 00:22:07.671 00:22:07.671 Suite: memory 00:22:07.671 Test: alloc and free memory map ...[2024-12-06 06:48:40.207736] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:22:07.671 passed 00:22:07.932 Test: mem map translation ...[2024-12-06 06:48:40.269074] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:22:07.932 [2024-12-06 06:48:40.269195] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:22:07.932 [2024-12-06 06:48:40.269297] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:22:07.932 [2024-12-06 06:48:40.269330] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:22:07.932 passed 00:22:07.932 Test: mem map registration ...[2024-12-06 06:48:40.369678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:22:07.932 [2024-12-06 06:48:40.369818] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:22:07.932 passed 00:22:07.932 Test: mem map adjacent registrations ...passed 00:22:07.932 00:22:07.932 Run Summary: Type Total Ran Passed Failed Inactive 00:22:07.932 suites 1 1 n/a 0 0 00:22:07.932 tests 4 4 4 0 0 00:22:07.932 asserts 152 152 152 0 n/a 00:22:07.932 00:22:07.932 Elapsed time = 0.347 seconds 00:22:08.191 00:22:08.191 real 0m0.388s 00:22:08.191 user 0m0.355s 00:22:08.191 sys 0m0.025s 00:22:08.191 06:48:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.191 06:48:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:22:08.191 ************************************ 00:22:08.191 END TEST env_memory 00:22:08.191 ************************************ 00:22:08.191 06:48:40 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:08.191 06:48:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:08.191 06:48:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.191 06:48:40 env -- common/autotest_common.sh@10 -- # set +x 00:22:08.191 ************************************ 00:22:08.191 START TEST env_vtophys 00:22:08.191 ************************************ 00:22:08.191 06:48:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:08.191 EAL: lib.eal log level changed from notice to debug 00:22:08.191 EAL: Detected lcore 0 as core 0 on socket 0 00:22:08.191 EAL: Detected lcore 1 as core 0 on socket 0 00:22:08.191 EAL: Detected lcore 2 as core 0 on socket 0 00:22:08.191 EAL: Detected lcore 3 as core 0 on socket 0 00:22:08.191 EAL: Detected lcore 4 as core 0 on socket 0 00:22:08.191 EAL: Detected lcore 5 as core 0 on socket 0 00:22:08.191 EAL: Detected lcore 6 as core 0 on socket 0 00:22:08.191 EAL: Detected lcore 7 as core 0 on socket 0 00:22:08.192 EAL: Detected lcore 8 as core 0 on socket 0 00:22:08.192 EAL: Detected lcore 9 as core 0 on socket 0 00:22:08.192 EAL: Maximum logical cores by configuration: 128 00:22:08.192 EAL: Detected CPU lcores: 10 00:22:08.192 EAL: Detected NUMA nodes: 1 00:22:08.192 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:22:08.192 EAL: Detected shared linkage of DPDK 00:22:08.192 EAL: No shared files mode enabled, IPC will be disabled 00:22:08.192 EAL: Selected IOVA mode 'PA' 00:22:08.192 EAL: Probing VFIO support... 00:22:08.192 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:22:08.192 EAL: VFIO modules not loaded, skipping VFIO support... 00:22:08.192 EAL: Ask a virtual area of 0x2e000 bytes 00:22:08.192 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:22:08.192 EAL: Setting up physically contiguous memory... 
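Note: the EAL lines above show the vtophys run falling back to IOVA mode 'PA' because neither vfio nor vfio_pci is loaded in this test VM. A quick pre-flight check along these lines (a bash sketch for illustration, not part of the test scripts; the /sys/module paths are the same ones the EAL errors above reference) distinguishes the two paths before DPDK prints its probe errors:

    # Sketch: detect whether VFIO-backed IOVA is even possible on this host.
    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
        echo "vfio loaded: EAL may select IOVA mode VA"
    else
        echo "vfio missing: EAL falls back to IOVA mode PA (as in this log)"
    fi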
00:22:08.192 EAL: Setting maximum number of open files to 524288 00:22:08.192 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:22:08.192 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:22:08.192 EAL: Ask a virtual area of 0x61000 bytes 00:22:08.192 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:22:08.192 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:08.192 EAL: Ask a virtual area of 0x400000000 bytes 00:22:08.192 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:22:08.192 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:22:08.192 EAL: Ask a virtual area of 0x61000 bytes 00:22:08.192 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:22:08.192 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:08.192 EAL: Ask a virtual area of 0x400000000 bytes 00:22:08.192 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:22:08.192 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:22:08.192 EAL: Ask a virtual area of 0x61000 bytes 00:22:08.192 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:22:08.192 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:08.192 EAL: Ask a virtual area of 0x400000000 bytes 00:22:08.192 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:22:08.192 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:22:08.192 EAL: Ask a virtual area of 0x61000 bytes 00:22:08.192 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:22:08.192 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:08.192 EAL: Ask a virtual area of 0x400000000 bytes 00:22:08.192 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:22:08.192 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:22:08.192 EAL: Hugepages will be freed exactly as allocated. 00:22:08.192 EAL: No shared files mode enabled, IPC is disabled 00:22:08.192 EAL: No shared files mode enabled, IPC is disabled 00:22:08.192 EAL: TSC frequency is ~2200000 KHz 00:22:08.192 EAL: Main lcore 0 is ready (tid=7fd245b0ca40;cpuset=[0]) 00:22:08.192 EAL: Trying to obtain current memory policy. 00:22:08.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:08.192 EAL: Restoring previous memory policy: 0 00:22:08.192 EAL: request: mp_malloc_sync 00:22:08.192 EAL: No shared files mode enabled, IPC is disabled 00:22:08.192 EAL: Heap on socket 0 was expanded by 2MB 00:22:08.192 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:22:08.192 EAL: No PCI address specified using 'addr=' in: bus=pci 00:22:08.192 EAL: Mem event callback 'spdk:(nil)' registered 00:22:08.192 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:22:08.451 00:22:08.451 00:22:08.451 CUnit - A unit testing framework for C - Version 2.1-3 00:22:08.451 http://cunit.sourceforge.net/ 00:22:08.451 00:22:08.451 00:22:08.451 Suite: components_suite 00:22:08.710 Test: vtophys_malloc_test ...passed 00:22:08.710 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
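Note: each of the four memseg lists reserved above pairs a small 0x61000-byte header area with a 0x400000000-byte (16 GiB) VA window, which is consistent with the advertised n_segs:8192 at hugepage_sz:2097152 (2 MiB): 8192 segments x 2 MiB = 16 GiB per list. A one-line sanity check (bash, illustrative only):

    # 8192 segs/list * 2 MiB/seg, expressed in GiB -> prints 16
    echo $(( 8192 * 2 * 1024 * 1024 / (1024 * 1024 * 1024) ))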
00:22:08.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:08.710 EAL: Restoring previous memory policy: 4 00:22:08.710 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.710 EAL: request: mp_malloc_sync 00:22:08.710 EAL: No shared files mode enabled, IPC is disabled 00:22:08.710 EAL: Heap on socket 0 was expanded by 4MB 00:22:08.710 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.710 EAL: request: mp_malloc_sync 00:22:08.710 EAL: No shared files mode enabled, IPC is disabled 00:22:08.710 EAL: Heap on socket 0 was shrunk by 4MB 00:22:08.710 EAL: Trying to obtain current memory policy. 00:22:08.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:08.710 EAL: Restoring previous memory policy: 4 00:22:08.710 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.710 EAL: request: mp_malloc_sync 00:22:08.710 EAL: No shared files mode enabled, IPC is disabled 00:22:08.710 EAL: Heap on socket 0 was expanded by 6MB 00:22:08.710 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.710 EAL: request: mp_malloc_sync 00:22:08.710 EAL: No shared files mode enabled, IPC is disabled 00:22:08.710 EAL: Heap on socket 0 was shrunk by 6MB 00:22:08.710 EAL: Trying to obtain current memory policy. 00:22:08.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:08.710 EAL: Restoring previous memory policy: 4 00:22:08.710 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.710 EAL: request: mp_malloc_sync 00:22:08.710 EAL: No shared files mode enabled, IPC is disabled 00:22:08.710 EAL: Heap on socket 0 was expanded by 10MB 00:22:08.710 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.710 EAL: request: mp_malloc_sync 00:22:08.710 EAL: No shared files mode enabled, IPC is disabled 00:22:08.710 EAL: Heap on socket 0 was shrunk by 10MB 00:22:08.710 EAL: Trying to obtain current memory policy. 00:22:08.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:08.710 EAL: Restoring previous memory policy: 4 00:22:08.710 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.710 EAL: request: mp_malloc_sync 00:22:08.710 EAL: No shared files mode enabled, IPC is disabled 00:22:08.710 EAL: Heap on socket 0 was expanded by 18MB 00:22:08.710 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.710 EAL: request: mp_malloc_sync 00:22:08.710 EAL: No shared files mode enabled, IPC is disabled 00:22:08.710 EAL: Heap on socket 0 was shrunk by 18MB 00:22:08.969 EAL: Trying to obtain current memory policy. 00:22:08.969 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:08.969 EAL: Restoring previous memory policy: 4 00:22:08.969 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.969 EAL: request: mp_malloc_sync 00:22:08.969 EAL: No shared files mode enabled, IPC is disabled 00:22:08.969 EAL: Heap on socket 0 was expanded by 34MB 00:22:08.969 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.969 EAL: request: mp_malloc_sync 00:22:08.969 EAL: No shared files mode enabled, IPC is disabled 00:22:08.969 EAL: Heap on socket 0 was shrunk by 34MB 00:22:08.969 EAL: Trying to obtain current memory policy. 
00:22:08.969 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:08.969 EAL: Restoring previous memory policy: 4 00:22:08.969 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.969 EAL: request: mp_malloc_sync 00:22:08.969 EAL: No shared files mode enabled, IPC is disabled 00:22:08.969 EAL: Heap on socket 0 was expanded by 66MB 00:22:08.969 EAL: Calling mem event callback 'spdk:(nil)' 00:22:08.969 EAL: request: mp_malloc_sync 00:22:08.969 EAL: No shared files mode enabled, IPC is disabled 00:22:08.969 EAL: Heap on socket 0 was shrunk by 66MB 00:22:09.227 EAL: Trying to obtain current memory policy. 00:22:09.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:09.227 EAL: Restoring previous memory policy: 4 00:22:09.227 EAL: Calling mem event callback 'spdk:(nil)' 00:22:09.227 EAL: request: mp_malloc_sync 00:22:09.227 EAL: No shared files mode enabled, IPC is disabled 00:22:09.227 EAL: Heap on socket 0 was expanded by 130MB 00:22:09.486 EAL: Calling mem event callback 'spdk:(nil)' 00:22:09.486 EAL: request: mp_malloc_sync 00:22:09.486 EAL: No shared files mode enabled, IPC is disabled 00:22:09.486 EAL: Heap on socket 0 was shrunk by 130MB 00:22:09.486 EAL: Trying to obtain current memory policy. 00:22:09.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:09.486 EAL: Restoring previous memory policy: 4 00:22:09.486 EAL: Calling mem event callback 'spdk:(nil)' 00:22:09.486 EAL: request: mp_malloc_sync 00:22:09.486 EAL: No shared files mode enabled, IPC is disabled 00:22:09.486 EAL: Heap on socket 0 was expanded by 258MB 00:22:10.054 EAL: Calling mem event callback 'spdk:(nil)' 00:22:10.054 EAL: request: mp_malloc_sync 00:22:10.054 EAL: No shared files mode enabled, IPC is disabled 00:22:10.054 EAL: Heap on socket 0 was shrunk by 258MB 00:22:10.313 EAL: Trying to obtain current memory policy. 00:22:10.313 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:10.572 EAL: Restoring previous memory policy: 4 00:22:10.572 EAL: Calling mem event callback 'spdk:(nil)' 00:22:10.572 EAL: request: mp_malloc_sync 00:22:10.572 EAL: No shared files mode enabled, IPC is disabled 00:22:10.572 EAL: Heap on socket 0 was expanded by 514MB 00:22:11.140 EAL: Calling mem event callback 'spdk:(nil)' 00:22:11.398 EAL: request: mp_malloc_sync 00:22:11.398 EAL: No shared files mode enabled, IPC is disabled 00:22:11.398 EAL: Heap on socket 0 was shrunk by 514MB 00:22:11.966 EAL: Trying to obtain current memory policy. 
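Note: the expand/shrink rounds in vtophys_spdk_malloc_test step through 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB above and finish with 1026 MB below; every size fits the pattern 2^n + 2 MB, which looks like a power-of-two allocation on top of the 2 MB added when the main lcore heap first came up. The sequence can be reproduced with a trivial loop (bash, illustrative only):

    # Prints 4 6 10 18 34 66 130 258 514 1026 -- the MB sizes seen in this log
    for n in {1..10}; do printf '%s ' $(( (1 << n) + 2 )); done; echo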
00:22:11.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:12.224 EAL: Restoring previous memory policy: 4 00:22:12.224 EAL: Calling mem event callback 'spdk:(nil)' 00:22:12.224 EAL: request: mp_malloc_sync 00:22:12.224 EAL: No shared files mode enabled, IPC is disabled 00:22:12.224 EAL: Heap on socket 0 was expanded by 1026MB 00:22:14.124 EAL: Calling mem event callback 'spdk:(nil)' 00:22:14.124 EAL: request: mp_malloc_sync 00:22:14.124 EAL: No shared files mode enabled, IPC is disabled 00:22:14.124 EAL: Heap on socket 0 was shrunk by 1026MB 00:22:15.499 passed 00:22:15.499 00:22:15.499 Run Summary: Type Total Ran Passed Failed Inactive 00:22:15.499 suites 1 1 n/a 0 0 00:22:15.499 tests 2 2 2 0 0 00:22:15.499 asserts 5698 5698 5698 0 n/a 00:22:15.499 00:22:15.499 Elapsed time = 6.920 seconds 00:22:15.499 EAL: Calling mem event callback 'spdk:(nil)' 00:22:15.499 EAL: request: mp_malloc_sync 00:22:15.499 EAL: No shared files mode enabled, IPC is disabled 00:22:15.499 EAL: Heap on socket 0 was shrunk by 2MB 00:22:15.499 EAL: No shared files mode enabled, IPC is disabled 00:22:15.500 EAL: No shared files mode enabled, IPC is disabled 00:22:15.500 EAL: No shared files mode enabled, IPC is disabled 00:22:15.500 00:22:15.500 real 0m7.277s 00:22:15.500 user 0m6.430s 00:22:15.500 sys 0m0.688s 00:22:15.500 06:48:47 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.500 ************************************ 00:22:15.500 06:48:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:22:15.500 END TEST env_vtophys 00:22:15.500 ************************************ 00:22:15.500 06:48:47 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:22:15.500 06:48:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:15.500 06:48:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.500 06:48:47 env -- common/autotest_common.sh@10 -- # set +x 00:22:15.500 ************************************ 00:22:15.500 START TEST env_pci 00:22:15.500 ************************************ 00:22:15.500 06:48:47 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:22:15.500 00:22:15.500 00:22:15.500 CUnit - A unit testing framework for C - Version 2.1-3 00:22:15.500 http://cunit.sourceforge.net/ 00:22:15.500 00:22:15.500 00:22:15.500 Suite: pci 00:22:15.500 Test: pci_hook ...[2024-12-06 06:48:47.918551] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57787 has claimed it 00:22:15.500 passed 00:22:15.500 00:22:15.500 Run Summary: Type Total Ran Passed Failed Inactive 00:22:15.500 suites 1 1 n/a 0 0 00:22:15.500 tests 1 1 1 0 0 00:22:15.500 asserts 25 25 25 0 n/a 00:22:15.500 00:22:15.500 Elapsed time = 0.005 seconds 00:22:15.500 EAL: Cannot find device (10000:00:01.0) 00:22:15.500 EAL: Failed to attach device on primary process 00:22:15.500 00:22:15.500 real 0m0.059s 00:22:15.500 user 0m0.029s 00:22:15.500 sys 0m0.029s 00:22:15.500 06:48:47 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.500 06:48:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:22:15.500 ************************************ 00:22:15.500 END TEST env_pci 00:22:15.500 ************************************ 00:22:15.500 06:48:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:22:15.500 06:48:47 env -- env/env.sh@15 -- # uname 00:22:15.500 06:48:47 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:22:15.500 06:48:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:22:15.500 06:48:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:22:15.500 06:48:47 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:15.500 06:48:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.500 06:48:47 env -- common/autotest_common.sh@10 -- # set +x 00:22:15.500 ************************************ 00:22:15.500 START TEST env_dpdk_post_init 00:22:15.500 ************************************ 00:22:15.500 06:48:47 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:22:15.500 EAL: Detected CPU lcores: 10 00:22:15.500 EAL: Detected NUMA nodes: 1 00:22:15.500 EAL: Detected shared linkage of DPDK 00:22:15.758 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:22:15.758 EAL: Selected IOVA mode 'PA' 00:22:15.759 TELEMETRY: No legacy callbacks, legacy socket not created 00:22:15.759 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:22:15.759 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:22:15.759 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:22:15.759 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:22:15.759 Starting DPDK initialization... 00:22:15.759 Starting SPDK post initialization... 00:22:15.759 SPDK NVMe probe 00:22:15.759 Attaching to 0000:00:10.0 00:22:15.759 Attaching to 0000:00:11.0 00:22:15.759 Attaching to 0000:00:12.0 00:22:15.759 Attaching to 0000:00:13.0 00:22:15.759 Attached to 0000:00:10.0 00:22:15.759 Attached to 0000:00:11.0 00:22:15.759 Attached to 0000:00:13.0 00:22:15.759 Attached to 0000:00:12.0 00:22:15.759 Cleaning up... 
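Note: env_dpdk_post_init above attaches four NVMe controllers at 0000:00:10.0 through 0000:00:13.0, all reporting PCI ID 1b36:0010 (the QEMU-emulated NVMe device, hence socket -1 on this single-NUMA-node VM). On a comparable host the same controllers can be listed before the test runs (bash, illustrative only):

    # List QEMU NVMe functions by vendor:device ID
    lspci -nn -d 1b36:0010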
00:22:15.759 00:22:15.759 real 0m0.305s 00:22:15.759 user 0m0.117s 00:22:15.759 sys 0m0.089s 00:22:15.759 06:48:48 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.759 06:48:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.759 ************************************ 00:22:15.759 END TEST env_dpdk_post_init 00:22:15.759 ************************************ 00:22:15.759 06:48:48 env -- env/env.sh@26 -- # uname 00:22:16.016 06:48:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:22:16.016 06:48:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:22:16.016 06:48:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:16.016 06:48:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.016 06:48:48 env -- common/autotest_common.sh@10 -- # set +x 00:22:16.016 ************************************ 00:22:16.016 START TEST env_mem_callbacks 00:22:16.016 ************************************ 00:22:16.016 06:48:48 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:22:16.016 EAL: Detected CPU lcores: 10 00:22:16.016 EAL: Detected NUMA nodes: 1 00:22:16.016 EAL: Detected shared linkage of DPDK 00:22:16.016 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:22:16.016 EAL: Selected IOVA mode 'PA' 00:22:16.016 TELEMETRY: No legacy callbacks, legacy socket not created 00:22:16.016 00:22:16.016 00:22:16.016 CUnit - A unit testing framework for C - Version 2.1-3 00:22:16.016 http://cunit.sourceforge.net/ 00:22:16.016 00:22:16.016 00:22:16.016 Suite: memory 00:22:16.016 Test: test ... 00:22:16.016 register 0x200000200000 2097152 00:22:16.016 malloc 3145728 00:22:16.016 register 0x200000400000 4194304 00:22:16.016 buf 0x2000004fffc0 len 3145728 PASSED 00:22:16.016 malloc 64 00:22:16.016 buf 0x2000004ffec0 len 64 PASSED 00:22:16.016 malloc 4194304 00:22:16.016 register 0x200000800000 6291456 00:22:16.016 buf 0x2000009fffc0 len 4194304 PASSED 00:22:16.016 free 0x2000004fffc0 3145728 00:22:16.016 free 0x2000004ffec0 64 00:22:16.016 unregister 0x200000400000 4194304 PASSED 00:22:16.016 free 0x2000009fffc0 4194304 00:22:16.016 unregister 0x200000800000 6291456 PASSED 00:22:16.016 malloc 8388608 00:22:16.016 register 0x200000400000 10485760 00:22:16.016 buf 0x2000005fffc0 len 8388608 PASSED 00:22:16.016 free 0x2000005fffc0 8388608 00:22:16.016 unregister 0x200000400000 10485760 PASSED 00:22:16.016 passed 00:22:16.016 00:22:16.016 Run Summary: Type Total Ran Passed Failed Inactive 00:22:16.016 suites 1 1 n/a 0 0 00:22:16.016 tests 1 1 1 0 0 00:22:16.016 asserts 15 15 15 0 n/a 00:22:16.016 00:22:16.016 Elapsed time = 0.062 seconds 00:22:16.273 00:22:16.273 real 0m0.255s 00:22:16.273 user 0m0.094s 00:22:16.273 sys 0m0.058s 00:22:16.273 06:48:48 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.273 06:48:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:22:16.273 ************************************ 00:22:16.273 END TEST env_mem_callbacks 00:22:16.273 ************************************ 00:22:16.273 00:22:16.273 real 0m8.729s 00:22:16.273 user 0m7.247s 00:22:16.273 sys 0m1.110s 00:22:16.273 06:48:48 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.273 06:48:48 env -- common/autotest_common.sh@10 -- # set +x 00:22:16.273 ************************************ 00:22:16.273 END TEST env 00:22:16.273 
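Note: every START TEST / END TEST banner pair in this log, together with the real/user/sys triple, comes from the autotest run_test helper wrapping each test binary or script. A minimal sketch of that shape (bash, illustrative only; the real helper in autotest_common.sh also toggles xtrace and validates its arguments, which is what the '[' 2 -le 1 ']' checks above are doing):

    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"              # emits the real/user/sys lines seen above
        echo "END TEST $name"
    }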
************************************ 00:22:16.273 06:48:48 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:22:16.273 06:48:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:16.273 06:48:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.273 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:22:16.273 ************************************ 00:22:16.273 START TEST rpc 00:22:16.273 ************************************ 00:22:16.273 06:48:48 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:22:16.273 * Looking for test storage... 00:22:16.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:22:16.273 06:48:48 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:16.273 06:48:48 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:22:16.273 06:48:48 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:16.532 06:48:48 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:16.532 06:48:48 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.532 06:48:48 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.532 06:48:48 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.532 06:48:48 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.532 06:48:48 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.532 06:48:48 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.532 06:48:48 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.532 06:48:48 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.532 06:48:48 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.532 06:48:48 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.532 06:48:48 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.532 06:48:48 rpc -- scripts/common.sh@344 -- # case "$op" in 00:22:16.532 06:48:48 rpc -- scripts/common.sh@345 -- # : 1 00:22:16.532 06:48:48 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.532 06:48:48 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.532 06:48:48 rpc -- scripts/common.sh@365 -- # decimal 1 00:22:16.532 06:48:48 rpc -- scripts/common.sh@353 -- # local d=1 00:22:16.532 06:48:48 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.532 06:48:48 rpc -- scripts/common.sh@355 -- # echo 1 00:22:16.532 06:48:48 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.532 06:48:48 rpc -- scripts/common.sh@366 -- # decimal 2 00:22:16.532 06:48:48 rpc -- scripts/common.sh@353 -- # local d=2 00:22:16.532 06:48:48 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.532 06:48:48 rpc -- scripts/common.sh@355 -- # echo 2 00:22:16.532 06:48:48 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.532 06:48:48 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.532 06:48:48 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.532 06:48:48 rpc -- scripts/common.sh@368 -- # return 0 00:22:16.532 06:48:48 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.532 06:48:48 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.533 --rc genhtml_branch_coverage=1 00:22:16.533 --rc genhtml_function_coverage=1 00:22:16.533 --rc genhtml_legend=1 00:22:16.533 --rc geninfo_all_blocks=1 00:22:16.533 --rc geninfo_unexecuted_blocks=1 00:22:16.533 00:22:16.533 ' 00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.533 --rc genhtml_branch_coverage=1 00:22:16.533 --rc genhtml_function_coverage=1 00:22:16.533 --rc genhtml_legend=1 00:22:16.533 --rc geninfo_all_blocks=1 00:22:16.533 --rc geninfo_unexecuted_blocks=1 00:22:16.533 00:22:16.533 ' 00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.533 --rc genhtml_branch_coverage=1 00:22:16.533 --rc genhtml_function_coverage=1 00:22:16.533 --rc genhtml_legend=1 00:22:16.533 --rc geninfo_all_blocks=1 00:22:16.533 --rc geninfo_unexecuted_blocks=1 00:22:16.533 00:22:16.533 ' 00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.533 --rc genhtml_branch_coverage=1 00:22:16.533 --rc genhtml_function_coverage=1 00:22:16.533 --rc genhtml_legend=1 00:22:16.533 --rc geninfo_all_blocks=1 00:22:16.533 --rc geninfo_unexecuted_blocks=1 00:22:16.533 00:22:16.533 ' 00:22:16.533 06:48:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57914 00:22:16.533 06:48:48 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:22:16.533 06:48:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:16.533 06:48:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57914 00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@835 -- # '[' -z 57914 ']' 00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
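Note: the rpc suite launches spdk_tgt with '-e bdev' (pid 57914 here) and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock accepts commands, which is what the 'Waiting for process to start up...' echo above announces. The core of that wait can be pictured as a poll loop (bash, simplified sketch only; the real helper also re-checks that the pid is still alive and confirms the server actually responds over RPC):

    # Wait for the SPDK RPC unix socket to appear, with a crude timeout
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done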
00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.533 06:48:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:16.533 [2024-12-06 06:48:49.005343] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:22:16.533 [2024-12-06 06:48:49.005513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57914 ] 00:22:16.791 [2024-12-06 06:48:49.181553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.791 [2024-12-06 06:48:49.284110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:22:16.791 [2024-12-06 06:48:49.284207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57914' to capture a snapshot of events at runtime. 00:22:16.791 [2024-12-06 06:48:49.284239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.791 [2024-12-06 06:48:49.284265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.791 [2024-12-06 06:48:49.284280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57914 for offline analysis/debug. 00:22:16.791 [2024-12-06 06:48:49.285509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.763 06:48:50 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.763 06:48:50 rpc -- common/autotest_common.sh@868 -- # return 0 00:22:17.763 06:48:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:22:17.763 06:48:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:22:17.763 06:48:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:22:17.763 06:48:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:22:17.763 06:48:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:17.763 06:48:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.763 06:48:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:17.763 ************************************ 00:22:17.763 START TEST rpc_integrity 00:22:17.763 ************************************ 00:22:17.763 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:22:17.763 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:17.763 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.763 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.763 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.763 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:22:17.763 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:22:17.763 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:22:17.763 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:22:17.763 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.763 06:48:50 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.763 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.763 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:22:17.763 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:22:17.764 { 00:22:17.764 "name": "Malloc0", 00:22:17.764 "aliases": [ 00:22:17.764 "2a1049f7-6f5f-4ae1-8cd0-072d43ad34ee" 00:22:17.764 ], 00:22:17.764 "product_name": "Malloc disk", 00:22:17.764 "block_size": 512, 00:22:17.764 "num_blocks": 16384, 00:22:17.764 "uuid": "2a1049f7-6f5f-4ae1-8cd0-072d43ad34ee", 00:22:17.764 "assigned_rate_limits": { 00:22:17.764 "rw_ios_per_sec": 0, 00:22:17.764 "rw_mbytes_per_sec": 0, 00:22:17.764 "r_mbytes_per_sec": 0, 00:22:17.764 "w_mbytes_per_sec": 0 00:22:17.764 }, 00:22:17.764 "claimed": false, 00:22:17.764 "zoned": false, 00:22:17.764 "supported_io_types": { 00:22:17.764 "read": true, 00:22:17.764 "write": true, 00:22:17.764 "unmap": true, 00:22:17.764 "flush": true, 00:22:17.764 "reset": true, 00:22:17.764 "nvme_admin": false, 00:22:17.764 "nvme_io": false, 00:22:17.764 "nvme_io_md": false, 00:22:17.764 "write_zeroes": true, 00:22:17.764 "zcopy": true, 00:22:17.764 "get_zone_info": false, 00:22:17.764 "zone_management": false, 00:22:17.764 "zone_append": false, 00:22:17.764 "compare": false, 00:22:17.764 "compare_and_write": false, 00:22:17.764 "abort": true, 00:22:17.764 "seek_hole": false, 00:22:17.764 "seek_data": false, 00:22:17.764 "copy": true, 00:22:17.764 "nvme_iov_md": false 00:22:17.764 }, 00:22:17.764 "memory_domains": [ 00:22:17.764 { 00:22:17.764 "dma_device_id": "system", 00:22:17.764 "dma_device_type": 1 00:22:17.764 }, 00:22:17.764 { 00:22:17.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.764 "dma_device_type": 2 00:22:17.764 } 00:22:17.764 ], 00:22:17.764 "driver_specific": {} 00:22:17.764 } 00:22:17.764 ]' 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.764 [2024-12-06 06:48:50.221101] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:22:17.764 [2024-12-06 06:48:50.221193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.764 [2024-12-06 06:48:50.221237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:17.764 [2024-12-06 06:48:50.221258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.764 [2024-12-06 06:48:50.224183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.764 [2024-12-06 06:48:50.224244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:22:17.764 Passthru0 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.764 
06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:22:17.764 { 00:22:17.764 "name": "Malloc0", 00:22:17.764 "aliases": [ 00:22:17.764 "2a1049f7-6f5f-4ae1-8cd0-072d43ad34ee" 00:22:17.764 ], 00:22:17.764 "product_name": "Malloc disk", 00:22:17.764 "block_size": 512, 00:22:17.764 "num_blocks": 16384, 00:22:17.764 "uuid": "2a1049f7-6f5f-4ae1-8cd0-072d43ad34ee", 00:22:17.764 "assigned_rate_limits": { 00:22:17.764 "rw_ios_per_sec": 0, 00:22:17.764 "rw_mbytes_per_sec": 0, 00:22:17.764 "r_mbytes_per_sec": 0, 00:22:17.764 "w_mbytes_per_sec": 0 00:22:17.764 }, 00:22:17.764 "claimed": true, 00:22:17.764 "claim_type": "exclusive_write", 00:22:17.764 "zoned": false, 00:22:17.764 "supported_io_types": { 00:22:17.764 "read": true, 00:22:17.764 "write": true, 00:22:17.764 "unmap": true, 00:22:17.764 "flush": true, 00:22:17.764 "reset": true, 00:22:17.764 "nvme_admin": false, 00:22:17.764 "nvme_io": false, 00:22:17.764 "nvme_io_md": false, 00:22:17.764 "write_zeroes": true, 00:22:17.764 "zcopy": true, 00:22:17.764 "get_zone_info": false, 00:22:17.764 "zone_management": false, 00:22:17.764 "zone_append": false, 00:22:17.764 "compare": false, 00:22:17.764 "compare_and_write": false, 00:22:17.764 "abort": true, 00:22:17.764 "seek_hole": false, 00:22:17.764 "seek_data": false, 00:22:17.764 "copy": true, 00:22:17.764 "nvme_iov_md": false 00:22:17.764 }, 00:22:17.764 "memory_domains": [ 00:22:17.764 { 00:22:17.764 "dma_device_id": "system", 00:22:17.764 "dma_device_type": 1 00:22:17.764 }, 00:22:17.764 { 00:22:17.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.764 "dma_device_type": 2 00:22:17.764 } 00:22:17.764 ], 00:22:17.764 "driver_specific": {} 00:22:17.764 }, 00:22:17.764 { 00:22:17.764 "name": "Passthru0", 00:22:17.764 "aliases": [ 00:22:17.764 "05a7043f-7235-5e6f-a135-f35c148951fc" 00:22:17.764 ], 00:22:17.764 "product_name": "passthru", 00:22:17.764 "block_size": 512, 00:22:17.764 "num_blocks": 16384, 00:22:17.764 "uuid": "05a7043f-7235-5e6f-a135-f35c148951fc", 00:22:17.764 "assigned_rate_limits": { 00:22:17.764 "rw_ios_per_sec": 0, 00:22:17.764 "rw_mbytes_per_sec": 0, 00:22:17.764 "r_mbytes_per_sec": 0, 00:22:17.764 "w_mbytes_per_sec": 0 00:22:17.764 }, 00:22:17.764 "claimed": false, 00:22:17.764 "zoned": false, 00:22:17.764 "supported_io_types": { 00:22:17.764 "read": true, 00:22:17.764 "write": true, 00:22:17.764 "unmap": true, 00:22:17.764 "flush": true, 00:22:17.764 "reset": true, 00:22:17.764 "nvme_admin": false, 00:22:17.764 "nvme_io": false, 00:22:17.764 "nvme_io_md": false, 00:22:17.764 "write_zeroes": true, 00:22:17.764 "zcopy": true, 00:22:17.764 "get_zone_info": false, 00:22:17.764 "zone_management": false, 00:22:17.764 "zone_append": false, 00:22:17.764 "compare": false, 00:22:17.764 "compare_and_write": false, 00:22:17.764 "abort": true, 00:22:17.764 "seek_hole": false, 00:22:17.764 "seek_data": false, 00:22:17.764 "copy": true, 00:22:17.764 "nvme_iov_md": false 00:22:17.764 }, 00:22:17.764 "memory_domains": [ 00:22:17.764 { 00:22:17.764 "dma_device_id": "system", 00:22:17.764 "dma_device_type": 1 00:22:17.764 }, 00:22:17.764 { 00:22:17.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.764 "dma_device_type": 2 
00:22:17.764 } 00:22:17.764 ], 00:22:17.764 "driver_specific": { 00:22:17.764 "passthru": { 00:22:17.764 "name": "Passthru0", 00:22:17.764 "base_bdev_name": "Malloc0" 00:22:17.764 } 00:22:17.764 } 00:22:17.764 } 00:22:17.764 ]' 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.764 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.764 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.050 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:18.050 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.050 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.050 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:22:18.050 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:22:18.050 06:48:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:22:18.050 00:22:18.050 real 0m0.353s 00:22:18.050 user 0m0.210s 00:22:18.050 sys 0m0.046s 00:22:18.050 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.050 06:48:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 ************************************ 00:22:18.050 END TEST rpc_integrity 00:22:18.050 ************************************ 00:22:18.050 06:48:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:22:18.050 06:48:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:18.050 06:48:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.050 06:48:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 ************************************ 00:22:18.050 START TEST rpc_plugins 00:22:18.050 ************************************ 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:22:18.050 { 00:22:18.050 "name": "Malloc1", 00:22:18.050 "aliases": 
[ 00:22:18.050 "94fd5911-2dd2-47d6-9f18-1192f3c3a09b" 00:22:18.050 ], 00:22:18.050 "product_name": "Malloc disk", 00:22:18.050 "block_size": 4096, 00:22:18.050 "num_blocks": 256, 00:22:18.050 "uuid": "94fd5911-2dd2-47d6-9f18-1192f3c3a09b", 00:22:18.050 "assigned_rate_limits": { 00:22:18.050 "rw_ios_per_sec": 0, 00:22:18.050 "rw_mbytes_per_sec": 0, 00:22:18.050 "r_mbytes_per_sec": 0, 00:22:18.050 "w_mbytes_per_sec": 0 00:22:18.050 }, 00:22:18.050 "claimed": false, 00:22:18.050 "zoned": false, 00:22:18.050 "supported_io_types": { 00:22:18.050 "read": true, 00:22:18.050 "write": true, 00:22:18.050 "unmap": true, 00:22:18.050 "flush": true, 00:22:18.050 "reset": true, 00:22:18.050 "nvme_admin": false, 00:22:18.050 "nvme_io": false, 00:22:18.050 "nvme_io_md": false, 00:22:18.050 "write_zeroes": true, 00:22:18.050 "zcopy": true, 00:22:18.050 "get_zone_info": false, 00:22:18.050 "zone_management": false, 00:22:18.050 "zone_append": false, 00:22:18.050 "compare": false, 00:22:18.050 "compare_and_write": false, 00:22:18.050 "abort": true, 00:22:18.050 "seek_hole": false, 00:22:18.050 "seek_data": false, 00:22:18.050 "copy": true, 00:22:18.050 "nvme_iov_md": false 00:22:18.050 }, 00:22:18.050 "memory_domains": [ 00:22:18.050 { 00:22:18.050 "dma_device_id": "system", 00:22:18.050 "dma_device_type": 1 00:22:18.050 }, 00:22:18.050 { 00:22:18.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.050 "dma_device_type": 2 00:22:18.050 } 00:22:18.050 ], 00:22:18.050 "driver_specific": {} 00:22:18.050 } 00:22:18.050 ]' 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:22:18.050 06:48:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:22:18.050 00:22:18.050 real 0m0.152s 00:22:18.050 user 0m0.102s 00:22:18.050 sys 0m0.013s 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.050 ************************************ 00:22:18.050 END TEST rpc_plugins 00:22:18.050 ************************************ 00:22:18.050 06:48:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:18.308 06:48:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:22:18.308 06:48:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:18.308 06:48:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.308 06:48:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:18.308 ************************************ 00:22:18.308 START TEST rpc_trace_cmd_test 00:22:18.308 ************************************ 00:22:18.308 06:48:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:22:18.308 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:22:18.308 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:22:18.308 06:48:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.308 06:48:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.308 06:48:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.308 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:22:18.308 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57914", 00:22:18.308 "tpoint_group_mask": "0x8", 00:22:18.308 "iscsi_conn": { 00:22:18.308 "mask": "0x2", 00:22:18.308 "tpoint_mask": "0x0" 00:22:18.308 }, 00:22:18.308 "scsi": { 00:22:18.308 "mask": "0x4", 00:22:18.308 "tpoint_mask": "0x0" 00:22:18.308 }, 00:22:18.308 "bdev": { 00:22:18.308 "mask": "0x8", 00:22:18.308 "tpoint_mask": "0xffffffffffffffff" 00:22:18.308 }, 00:22:18.308 "nvmf_rdma": { 00:22:18.308 "mask": "0x10", 00:22:18.308 "tpoint_mask": "0x0" 00:22:18.308 }, 00:22:18.308 "nvmf_tcp": { 00:22:18.308 "mask": "0x20", 00:22:18.308 "tpoint_mask": "0x0" 00:22:18.308 }, 00:22:18.308 "ftl": { 00:22:18.308 "mask": "0x40", 00:22:18.308 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "blobfs": { 00:22:18.309 "mask": "0x80", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "dsa": { 00:22:18.309 "mask": "0x200", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "thread": { 00:22:18.309 "mask": "0x400", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "nvme_pcie": { 00:22:18.309 "mask": "0x800", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "iaa": { 00:22:18.309 "mask": "0x1000", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "nvme_tcp": { 00:22:18.309 "mask": "0x2000", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "bdev_nvme": { 00:22:18.309 "mask": "0x4000", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "sock": { 00:22:18.309 "mask": "0x8000", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "blob": { 00:22:18.309 "mask": "0x10000", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "bdev_raid": { 00:22:18.309 "mask": "0x20000", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 }, 00:22:18.309 "scheduler": { 00:22:18.309 "mask": "0x40000", 00:22:18.309 "tpoint_mask": "0x0" 00:22:18.309 } 00:22:18.309 }' 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:22:18.309 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:22:18.567 06:48:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:22:18.567 00:22:18.567 real 0m0.265s 00:22:18.567 user 0m0.230s 00:22:18.567 sys 0m0.025s 00:22:18.567 06:48:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:22:18.567 06:48:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.568 ************************************ 00:22:18.568 END TEST rpc_trace_cmd_test 00:22:18.568 ************************************ 00:22:18.568 06:48:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:22:18.568 06:48:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:22:18.568 06:48:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:22:18.568 06:48:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:18.568 06:48:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.568 06:48:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:18.568 ************************************ 00:22:18.568 START TEST rpc_daemon_integrity 00:22:18.568 ************************************ 00:22:18.568 06:48:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:22:18.568 06:48:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:18.568 06:48:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.568 06:48:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.568 06:48:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.568 06:48:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:22:18.568 06:48:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:22:18.568 { 00:22:18.568 "name": "Malloc2", 00:22:18.568 "aliases": [ 00:22:18.568 "e428322d-08ec-4ee9-81ec-e2a6396bff5e" 00:22:18.568 ], 00:22:18.568 "product_name": "Malloc disk", 00:22:18.568 "block_size": 512, 00:22:18.568 "num_blocks": 16384, 00:22:18.568 "uuid": "e428322d-08ec-4ee9-81ec-e2a6396bff5e", 00:22:18.568 "assigned_rate_limits": { 00:22:18.568 "rw_ios_per_sec": 0, 00:22:18.568 "rw_mbytes_per_sec": 0, 00:22:18.568 "r_mbytes_per_sec": 0, 00:22:18.568 "w_mbytes_per_sec": 0 00:22:18.568 }, 00:22:18.568 "claimed": false, 00:22:18.568 "zoned": false, 00:22:18.568 "supported_io_types": { 00:22:18.568 "read": true, 00:22:18.568 "write": true, 00:22:18.568 "unmap": true, 00:22:18.568 "flush": true, 00:22:18.568 "reset": true, 00:22:18.568 "nvme_admin": false, 00:22:18.568 "nvme_io": false, 00:22:18.568 "nvme_io_md": false, 00:22:18.568 "write_zeroes": true, 00:22:18.568 "zcopy": true, 00:22:18.568 "get_zone_info": false, 00:22:18.568 "zone_management": false, 00:22:18.568 "zone_append": false, 00:22:18.568 "compare": false, 00:22:18.568 
"compare_and_write": false, 00:22:18.568 "abort": true, 00:22:18.568 "seek_hole": false, 00:22:18.568 "seek_data": false, 00:22:18.568 "copy": true, 00:22:18.568 "nvme_iov_md": false 00:22:18.568 }, 00:22:18.568 "memory_domains": [ 00:22:18.568 { 00:22:18.568 "dma_device_id": "system", 00:22:18.568 "dma_device_type": 1 00:22:18.568 }, 00:22:18.568 { 00:22:18.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.568 "dma_device_type": 2 00:22:18.568 } 00:22:18.568 ], 00:22:18.568 "driver_specific": {} 00:22:18.568 } 00:22:18.568 ]' 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.568 [2024-12-06 06:48:51.107765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:22:18.568 [2024-12-06 06:48:51.107838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.568 [2024-12-06 06:48:51.107871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:18.568 [2024-12-06 06:48:51.107898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.568 [2024-12-06 06:48:51.110572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.568 [2024-12-06 06:48:51.110624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:22:18.568 Passthru0 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:22:18.568 { 00:22:18.568 "name": "Malloc2", 00:22:18.568 "aliases": [ 00:22:18.568 "e428322d-08ec-4ee9-81ec-e2a6396bff5e" 00:22:18.568 ], 00:22:18.568 "product_name": "Malloc disk", 00:22:18.568 "block_size": 512, 00:22:18.568 "num_blocks": 16384, 00:22:18.568 "uuid": "e428322d-08ec-4ee9-81ec-e2a6396bff5e", 00:22:18.568 "assigned_rate_limits": { 00:22:18.568 "rw_ios_per_sec": 0, 00:22:18.568 "rw_mbytes_per_sec": 0, 00:22:18.568 "r_mbytes_per_sec": 0, 00:22:18.568 "w_mbytes_per_sec": 0 00:22:18.568 }, 00:22:18.568 "claimed": true, 00:22:18.568 "claim_type": "exclusive_write", 00:22:18.568 "zoned": false, 00:22:18.568 "supported_io_types": { 00:22:18.568 "read": true, 00:22:18.568 "write": true, 00:22:18.568 "unmap": true, 00:22:18.568 "flush": true, 00:22:18.568 "reset": true, 00:22:18.568 "nvme_admin": false, 00:22:18.568 "nvme_io": false, 00:22:18.568 "nvme_io_md": false, 00:22:18.568 "write_zeroes": true, 00:22:18.568 "zcopy": true, 00:22:18.568 "get_zone_info": false, 00:22:18.568 "zone_management": false, 00:22:18.568 "zone_append": false, 00:22:18.568 "compare": false, 00:22:18.568 "compare_and_write": false, 00:22:18.568 "abort": true, 00:22:18.568 "seek_hole": false, 00:22:18.568 "seek_data": false, 
00:22:18.568 "copy": true, 00:22:18.568 "nvme_iov_md": false 00:22:18.568 }, 00:22:18.568 "memory_domains": [ 00:22:18.568 { 00:22:18.568 "dma_device_id": "system", 00:22:18.568 "dma_device_type": 1 00:22:18.568 }, 00:22:18.568 { 00:22:18.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.568 "dma_device_type": 2 00:22:18.568 } 00:22:18.568 ], 00:22:18.568 "driver_specific": {} 00:22:18.568 }, 00:22:18.568 { 00:22:18.568 "name": "Passthru0", 00:22:18.568 "aliases": [ 00:22:18.568 "5407f370-2754-544d-8381-cef8833bc92c" 00:22:18.568 ], 00:22:18.568 "product_name": "passthru", 00:22:18.568 "block_size": 512, 00:22:18.568 "num_blocks": 16384, 00:22:18.568 "uuid": "5407f370-2754-544d-8381-cef8833bc92c", 00:22:18.568 "assigned_rate_limits": { 00:22:18.568 "rw_ios_per_sec": 0, 00:22:18.568 "rw_mbytes_per_sec": 0, 00:22:18.568 "r_mbytes_per_sec": 0, 00:22:18.568 "w_mbytes_per_sec": 0 00:22:18.568 }, 00:22:18.568 "claimed": false, 00:22:18.568 "zoned": false, 00:22:18.568 "supported_io_types": { 00:22:18.568 "read": true, 00:22:18.568 "write": true, 00:22:18.568 "unmap": true, 00:22:18.568 "flush": true, 00:22:18.568 "reset": true, 00:22:18.568 "nvme_admin": false, 00:22:18.568 "nvme_io": false, 00:22:18.568 "nvme_io_md": false, 00:22:18.568 "write_zeroes": true, 00:22:18.568 "zcopy": true, 00:22:18.568 "get_zone_info": false, 00:22:18.568 "zone_management": false, 00:22:18.568 "zone_append": false, 00:22:18.568 "compare": false, 00:22:18.568 "compare_and_write": false, 00:22:18.568 "abort": true, 00:22:18.568 "seek_hole": false, 00:22:18.568 "seek_data": false, 00:22:18.568 "copy": true, 00:22:18.568 "nvme_iov_md": false 00:22:18.568 }, 00:22:18.568 "memory_domains": [ 00:22:18.568 { 00:22:18.568 "dma_device_id": "system", 00:22:18.568 "dma_device_type": 1 00:22:18.568 }, 00:22:18.568 { 00:22:18.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.568 "dma_device_type": 2 00:22:18.568 } 00:22:18.568 ], 00:22:18.568 "driver_specific": { 00:22:18.568 "passthru": { 00:22:18.568 "name": "Passthru0", 00:22:18.568 "base_bdev_name": "Malloc2" 00:22:18.568 } 00:22:18.568 } 00:22:18.568 } 00:22:18.568 ]' 00:22:18.568 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:22:18.826 00:22:18.826 real 0m0.331s 00:22:18.826 user 0m0.205s 00:22:18.826 sys 0m0.039s 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.826 06:48:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.826 ************************************ 00:22:18.826 END TEST rpc_daemon_integrity 00:22:18.826 ************************************ 00:22:18.826 06:48:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:18.826 06:48:51 rpc -- rpc/rpc.sh@84 -- # killprocess 57914 00:22:18.826 06:48:51 rpc -- common/autotest_common.sh@954 -- # '[' -z 57914 ']' 00:22:18.826 06:48:51 rpc -- common/autotest_common.sh@958 -- # kill -0 57914 00:22:18.826 06:48:51 rpc -- common/autotest_common.sh@959 -- # uname 00:22:18.826 06:48:51 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.827 06:48:51 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57914 00:22:18.827 06:48:51 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.827 06:48:51 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.827 killing process with pid 57914 00:22:18.827 06:48:51 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57914' 00:22:18.827 06:48:51 rpc -- common/autotest_common.sh@973 -- # kill 57914 00:22:18.827 06:48:51 rpc -- common/autotest_common.sh@978 -- # wait 57914 00:22:21.357 00:22:21.357 real 0m4.737s 00:22:21.357 user 0m5.599s 00:22:21.357 sys 0m0.697s 00:22:21.357 06:48:53 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.357 ************************************ 00:22:21.357 END TEST rpc 00:22:21.357 ************************************ 00:22:21.357 06:48:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:21.357 06:48:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:22:21.357 06:48:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:21.357 06:48:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.357 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:22:21.357 ************************************ 00:22:21.357 START TEST skip_rpc 00:22:21.357 ************************************ 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:22:21.357 * Looking for test storage... 
00:22:21.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.357 06:48:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.357 --rc genhtml_branch_coverage=1 00:22:21.357 --rc genhtml_function_coverage=1 00:22:21.357 --rc genhtml_legend=1 00:22:21.357 --rc geninfo_all_blocks=1 00:22:21.357 --rc geninfo_unexecuted_blocks=1 00:22:21.357 00:22:21.357 ' 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.357 --rc genhtml_branch_coverage=1 00:22:21.357 --rc genhtml_function_coverage=1 00:22:21.357 --rc genhtml_legend=1 00:22:21.357 --rc geninfo_all_blocks=1 00:22:21.357 --rc geninfo_unexecuted_blocks=1 00:22:21.357 00:22:21.357 ' 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.357 --rc genhtml_branch_coverage=1 00:22:21.357 --rc genhtml_function_coverage=1 00:22:21.357 --rc genhtml_legend=1 00:22:21.357 --rc geninfo_all_blocks=1 00:22:21.357 --rc geninfo_unexecuted_blocks=1 00:22:21.357 00:22:21.357 ' 00:22:21.357 06:48:53 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.357 --rc genhtml_branch_coverage=1 00:22:21.357 --rc genhtml_function_coverage=1 00:22:21.357 --rc genhtml_legend=1 00:22:21.357 --rc geninfo_all_blocks=1 00:22:21.357 --rc geninfo_unexecuted_blocks=1 00:22:21.357 00:22:21.357 ' 00:22:21.357 06:48:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:21.357 06:48:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:22:21.357 06:48:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:22:21.358 06:48:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:21.358 06:48:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.358 06:48:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:21.358 ************************************ 00:22:21.358 START TEST skip_rpc 00:22:21.358 ************************************ 00:22:21.358 06:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:22:21.358 06:48:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58138 00:22:21.358 06:48:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:22:21.358 06:48:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:21.358 06:48:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:22:21.358 [2024-12-06 06:48:53.766103] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:22:21.358 [2024-12-06 06:48:53.766252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58138 ] 00:22:21.358 [2024-12-06 06:48:53.939528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.615 [2024-12-06 06:48:54.104244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58138 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58138 ']' 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58138 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.879 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58138 00:22:26.880 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:26.880 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:26.880 killing process with pid 58138 00:22:26.880 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58138' 00:22:26.880 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58138 00:22:26.880 06:48:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58138 00:22:28.291 00:22:28.291 real 0m7.124s 00:22:28.291 user 0m6.707s 00:22:28.291 sys 0m0.312s 00:22:28.291 06:49:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.291 ************************************ 00:22:28.291 END TEST skip_rpc 00:22:28.291 ************************************ 00:22:28.291 06:49:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:22:28.291 06:49:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:22:28.291 06:49:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:28.291 06:49:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.291 06:49:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 ************************************ 00:22:28.291 START TEST skip_rpc_with_json 00:22:28.291 ************************************ 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58242 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58242 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58242 ']' 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.291 06:49:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:28.569 [2024-12-06 06:49:00.921442] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:22:28.569 [2024-12-06 06:49:00.921606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58242 ] 00:22:28.569 [2024-12-06 06:49:01.126598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.827 [2024-12-06 06:49:01.237555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.762 06:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.762 06:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:22:29.762 06:49:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:22:29.762 06:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.762 06:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:29.762 [2024-12-06 06:49:02.000169] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:22:29.762 request: 00:22:29.762 { 00:22:29.762 "trtype": "tcp", 00:22:29.762 "method": "nvmf_get_transports", 00:22:29.762 "req_id": 1 00:22:29.762 } 00:22:29.762 Got JSON-RPC error response 00:22:29.762 response: 00:22:29.762 { 00:22:29.762 "code": -19, 00:22:29.762 "message": "No such device" 00:22:29.762 } 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:29.762 [2024-12-06 06:49:02.008324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.762 06:49:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:29.762 { 00:22:29.762 "subsystems": [ 00:22:29.762 { 00:22:29.762 "subsystem": "fsdev", 00:22:29.762 "config": [ 00:22:29.762 { 00:22:29.762 "method": "fsdev_set_opts", 00:22:29.762 "params": { 00:22:29.762 "fsdev_io_pool_size": 65535, 00:22:29.762 "fsdev_io_cache_size": 256 00:22:29.762 } 00:22:29.762 } 00:22:29.762 ] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "keyring", 00:22:29.762 "config": [] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "iobuf", 00:22:29.762 "config": [ 00:22:29.762 { 00:22:29.762 "method": "iobuf_set_options", 00:22:29.762 "params": { 00:22:29.762 "small_pool_count": 8192, 00:22:29.762 "large_pool_count": 1024, 00:22:29.762 "small_bufsize": 8192, 00:22:29.762 "large_bufsize": 135168, 00:22:29.762 "enable_numa": false 00:22:29.762 } 00:22:29.762 } 00:22:29.762 ] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "sock", 00:22:29.762 "config": [ 00:22:29.762 { 
00:22:29.762 "method": "sock_set_default_impl", 00:22:29.762 "params": { 00:22:29.762 "impl_name": "posix" 00:22:29.762 } 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "method": "sock_impl_set_options", 00:22:29.762 "params": { 00:22:29.762 "impl_name": "ssl", 00:22:29.762 "recv_buf_size": 4096, 00:22:29.762 "send_buf_size": 4096, 00:22:29.762 "enable_recv_pipe": true, 00:22:29.762 "enable_quickack": false, 00:22:29.762 "enable_placement_id": 0, 00:22:29.762 "enable_zerocopy_send_server": true, 00:22:29.762 "enable_zerocopy_send_client": false, 00:22:29.762 "zerocopy_threshold": 0, 00:22:29.762 "tls_version": 0, 00:22:29.762 "enable_ktls": false 00:22:29.762 } 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "method": "sock_impl_set_options", 00:22:29.762 "params": { 00:22:29.762 "impl_name": "posix", 00:22:29.762 "recv_buf_size": 2097152, 00:22:29.762 "send_buf_size": 2097152, 00:22:29.762 "enable_recv_pipe": true, 00:22:29.762 "enable_quickack": false, 00:22:29.762 "enable_placement_id": 0, 00:22:29.762 "enable_zerocopy_send_server": true, 00:22:29.762 "enable_zerocopy_send_client": false, 00:22:29.762 "zerocopy_threshold": 0, 00:22:29.762 "tls_version": 0, 00:22:29.762 "enable_ktls": false 00:22:29.762 } 00:22:29.762 } 00:22:29.762 ] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "vmd", 00:22:29.762 "config": [] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "accel", 00:22:29.762 "config": [ 00:22:29.762 { 00:22:29.762 "method": "accel_set_options", 00:22:29.762 "params": { 00:22:29.762 "small_cache_size": 128, 00:22:29.762 "large_cache_size": 16, 00:22:29.762 "task_count": 2048, 00:22:29.762 "sequence_count": 2048, 00:22:29.762 "buf_count": 2048 00:22:29.762 } 00:22:29.762 } 00:22:29.762 ] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "bdev", 00:22:29.762 "config": [ 00:22:29.762 { 00:22:29.762 "method": "bdev_set_options", 00:22:29.762 "params": { 00:22:29.762 "bdev_io_pool_size": 65535, 00:22:29.762 "bdev_io_cache_size": 256, 00:22:29.762 "bdev_auto_examine": true, 00:22:29.762 "iobuf_small_cache_size": 128, 00:22:29.762 "iobuf_large_cache_size": 16 00:22:29.762 } 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "method": "bdev_raid_set_options", 00:22:29.762 "params": { 00:22:29.762 "process_window_size_kb": 1024, 00:22:29.762 "process_max_bandwidth_mb_sec": 0 00:22:29.762 } 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "method": "bdev_iscsi_set_options", 00:22:29.762 "params": { 00:22:29.762 "timeout_sec": 30 00:22:29.762 } 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "method": "bdev_nvme_set_options", 00:22:29.762 "params": { 00:22:29.762 "action_on_timeout": "none", 00:22:29.762 "timeout_us": 0, 00:22:29.762 "timeout_admin_us": 0, 00:22:29.762 "keep_alive_timeout_ms": 10000, 00:22:29.762 "arbitration_burst": 0, 00:22:29.762 "low_priority_weight": 0, 00:22:29.762 "medium_priority_weight": 0, 00:22:29.762 "high_priority_weight": 0, 00:22:29.762 "nvme_adminq_poll_period_us": 10000, 00:22:29.762 "nvme_ioq_poll_period_us": 0, 00:22:29.762 "io_queue_requests": 0, 00:22:29.762 "delay_cmd_submit": true, 00:22:29.762 "transport_retry_count": 4, 00:22:29.762 "bdev_retry_count": 3, 00:22:29.762 "transport_ack_timeout": 0, 00:22:29.762 "ctrlr_loss_timeout_sec": 0, 00:22:29.762 "reconnect_delay_sec": 0, 00:22:29.762 "fast_io_fail_timeout_sec": 0, 00:22:29.762 "disable_auto_failback": false, 00:22:29.762 "generate_uuids": false, 00:22:29.762 "transport_tos": 0, 00:22:29.762 "nvme_error_stat": false, 00:22:29.762 "rdma_srq_size": 0, 00:22:29.762 "io_path_stat": false, 
00:22:29.762 "allow_accel_sequence": false, 00:22:29.762 "rdma_max_cq_size": 0, 00:22:29.762 "rdma_cm_event_timeout_ms": 0, 00:22:29.762 "dhchap_digests": [ 00:22:29.762 "sha256", 00:22:29.762 "sha384", 00:22:29.762 "sha512" 00:22:29.762 ], 00:22:29.762 "dhchap_dhgroups": [ 00:22:29.762 "null", 00:22:29.762 "ffdhe2048", 00:22:29.762 "ffdhe3072", 00:22:29.762 "ffdhe4096", 00:22:29.762 "ffdhe6144", 00:22:29.762 "ffdhe8192" 00:22:29.762 ] 00:22:29.762 } 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "method": "bdev_nvme_set_hotplug", 00:22:29.762 "params": { 00:22:29.762 "period_us": 100000, 00:22:29.762 "enable": false 00:22:29.762 } 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "method": "bdev_wait_for_examine" 00:22:29.762 } 00:22:29.762 ] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "scsi", 00:22:29.762 "config": null 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "scheduler", 00:22:29.762 "config": [ 00:22:29.762 { 00:22:29.762 "method": "framework_set_scheduler", 00:22:29.762 "params": { 00:22:29.762 "name": "static" 00:22:29.762 } 00:22:29.762 } 00:22:29.762 ] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "vhost_scsi", 00:22:29.762 "config": [] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "vhost_blk", 00:22:29.762 "config": [] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "ublk", 00:22:29.762 "config": [] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "nbd", 00:22:29.762 "config": [] 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "subsystem": "nvmf", 00:22:29.762 "config": [ 00:22:29.762 { 00:22:29.762 "method": "nvmf_set_config", 00:22:29.762 "params": { 00:22:29.762 "discovery_filter": "match_any", 00:22:29.762 "admin_cmd_passthru": { 00:22:29.762 "identify_ctrlr": false 00:22:29.762 }, 00:22:29.762 "dhchap_digests": [ 00:22:29.762 "sha256", 00:22:29.762 "sha384", 00:22:29.762 "sha512" 00:22:29.762 ], 00:22:29.762 "dhchap_dhgroups": [ 00:22:29.762 "null", 00:22:29.762 "ffdhe2048", 00:22:29.762 "ffdhe3072", 00:22:29.762 "ffdhe4096", 00:22:29.762 "ffdhe6144", 00:22:29.762 "ffdhe8192" 00:22:29.762 ] 00:22:29.762 } 00:22:29.762 }, 00:22:29.762 { 00:22:29.762 "method": "nvmf_set_max_subsystems", 00:22:29.763 "params": { 00:22:29.763 "max_subsystems": 1024 00:22:29.763 } 00:22:29.763 }, 00:22:29.763 { 00:22:29.763 "method": "nvmf_set_crdt", 00:22:29.763 "params": { 00:22:29.763 "crdt1": 0, 00:22:29.763 "crdt2": 0, 00:22:29.763 "crdt3": 0 00:22:29.763 } 00:22:29.763 }, 00:22:29.763 { 00:22:29.763 "method": "nvmf_create_transport", 00:22:29.763 "params": { 00:22:29.763 "trtype": "TCP", 00:22:29.763 "max_queue_depth": 128, 00:22:29.763 "max_io_qpairs_per_ctrlr": 127, 00:22:29.763 "in_capsule_data_size": 4096, 00:22:29.763 "max_io_size": 131072, 00:22:29.763 "io_unit_size": 131072, 00:22:29.763 "max_aq_depth": 128, 00:22:29.763 "num_shared_buffers": 511, 00:22:29.763 "buf_cache_size": 4294967295, 00:22:29.763 "dif_insert_or_strip": false, 00:22:29.763 "zcopy": false, 00:22:29.763 "c2h_success": true, 00:22:29.763 "sock_priority": 0, 00:22:29.763 "abort_timeout_sec": 1, 00:22:29.763 "ack_timeout": 0, 00:22:29.763 "data_wr_pool_size": 0 00:22:29.763 } 00:22:29.763 } 00:22:29.763 ] 00:22:29.763 }, 00:22:29.763 { 00:22:29.763 "subsystem": "iscsi", 00:22:29.763 "config": [ 00:22:29.763 { 00:22:29.763 "method": "iscsi_set_options", 00:22:29.763 "params": { 00:22:29.763 "node_base": "iqn.2016-06.io.spdk", 00:22:29.763 "max_sessions": 128, 00:22:29.763 "max_connections_per_session": 2, 00:22:29.763 "max_queue_depth": 64, 00:22:29.763 
"default_time2wait": 2, 00:22:29.763 "default_time2retain": 20, 00:22:29.763 "first_burst_length": 8192, 00:22:29.763 "immediate_data": true, 00:22:29.763 "allow_duplicated_isid": false, 00:22:29.763 "error_recovery_level": 0, 00:22:29.763 "nop_timeout": 60, 00:22:29.763 "nop_in_interval": 30, 00:22:29.763 "disable_chap": false, 00:22:29.763 "require_chap": false, 00:22:29.763 "mutual_chap": false, 00:22:29.763 "chap_group": 0, 00:22:29.763 "max_large_datain_per_connection": 64, 00:22:29.763 "max_r2t_per_connection": 4, 00:22:29.763 "pdu_pool_size": 36864, 00:22:29.763 "immediate_data_pool_size": 16384, 00:22:29.763 "data_out_pool_size": 2048 00:22:29.763 } 00:22:29.763 } 00:22:29.763 ] 00:22:29.763 } 00:22:29.763 ] 00:22:29.763 } 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58242 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58242 ']' 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58242 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58242 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.763 killing process with pid 58242 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58242' 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58242 00:22:29.763 06:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58242 00:22:32.291 06:49:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58292 00:22:32.291 06:49:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:22:32.291 06:49:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58292 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58292 ']' 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58292 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58292 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58292' 00:22:37.552 killing process with pid 58292 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58292 00:22:37.552 06:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58292 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:22:38.943 00:22:38.943 real 0m10.605s 00:22:38.943 user 0m10.279s 00:22:38.943 sys 0m0.745s 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:38.943 ************************************ 00:22:38.943 END TEST skip_rpc_with_json 00:22:38.943 ************************************ 00:22:38.943 06:49:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:22:38.943 06:49:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:38.943 06:49:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.943 06:49:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:38.943 ************************************ 00:22:38.943 START TEST skip_rpc_with_delay 00:22:38.943 ************************************ 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:22:38.943 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:22:39.200 [2024-12-06 06:49:11.582095] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
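The skip_rpc_with_delay error above is the expected outcome: spdk_tgt rejects --wait-for-rpc when --no-rpc-server is also given, since there would be no RPC server through which initialization could ever be resumed. A sketch of both invocations, assuming the repo-relative paths used elsewhere in this log:

build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc  # rejected, per app.c:842 above
build/bin/spdk_tgt -m 0x1 --wait-for-rpc &                # valid: init pauses after EAL setup
scripts/rpc.py framework_start_init                       # resumes subsystem initialization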
00:22:39.200 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:22:39.200 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:39.200 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:39.200 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:39.200 00:22:39.200 real 0m0.187s 00:22:39.200 user 0m0.112s 00:22:39.200 sys 0m0.073s 00:22:39.200 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.200 06:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:22:39.200 ************************************ 00:22:39.200 END TEST skip_rpc_with_delay 00:22:39.200 ************************************ 00:22:39.200 06:49:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:22:39.200 06:49:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:22:39.200 06:49:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:22:39.200 06:49:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:39.200 06:49:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.200 06:49:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.200 ************************************ 00:22:39.200 START TEST exit_on_failed_rpc_init 00:22:39.200 ************************************ 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58426 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58426 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58426 ']' 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.200 06:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.457 [2024-12-06 06:49:11.791767] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:22:39.457 [2024-12-06 06:49:11.791938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58426 ] 00:22:39.457 [2024-12-06 06:49:11.961787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.714 [2024-12-06 06:49:12.063527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.279 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.279 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:22:40.280 06:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:40.537 [2024-12-06 06:49:12.968693] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:22:40.537 [2024-12-06 06:49:12.968861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58444 ] 00:22:40.796 [2024-12-06 06:49:13.150602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.796 [2024-12-06 06:49:13.276010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.796 [2024-12-06 06:49:13.276187] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:22:40.796 [2024-12-06 06:49:13.276229] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:22:40.796 [2024-12-06 06:49:13.276287] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58426 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58426 ']' 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58426 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58426 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.054 killing process with pid 58426 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58426' 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58426 00:22:41.054 06:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58426 00:22:43.582 00:22:43.582 real 0m3.975s 00:22:43.582 user 0m4.625s 00:22:43.582 sys 0m0.500s 00:22:43.582 06:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.582 06:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.582 ************************************ 00:22:43.582 END TEST exit_on_failed_rpc_init 00:22:43.582 ************************************ 00:22:43.582 06:49:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:43.582 00:22:43.582 real 0m22.226s 00:22:43.582 user 0m21.875s 00:22:43.582 sys 0m1.814s 00:22:43.582 06:49:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.582 06:49:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:43.582 ************************************ 00:22:43.582 END TEST skip_rpc 00:22:43.582 ************************************ 00:22:43.582 06:49:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:22:43.582 06:49:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:43.582 06:49:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.582 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:22:43.582 
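The exit_on_failed_rpc_init case that just finished reduces to a Unix-socket collision: a second spdk_tgt cannot listen on the socket the first instance already owns, so it stops with a non-zero status, which the test's NOT wrapper converts back into a pass. A sketch under that assumption, with both instances left on the default /var/tmp/spdk.sock:

build/bin/spdk_tgt -m 0x1 &   # first instance owns /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x2     # second instance logs "socket path ... in use.
                              # Specify another." and spdk_app_stop's non-zero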
************************************ 00:22:43.582 START TEST rpc_client 00:22:43.582 ************************************ 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:22:43.582 * Looking for test storage... 00:22:43.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.582 06:49:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.582 --rc genhtml_branch_coverage=1 00:22:43.582 --rc genhtml_function_coverage=1 00:22:43.582 --rc genhtml_legend=1 00:22:43.582 --rc geninfo_all_blocks=1 00:22:43.582 --rc geninfo_unexecuted_blocks=1 00:22:43.582 00:22:43.582 ' 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.582 --rc genhtml_branch_coverage=1 00:22:43.582 --rc genhtml_function_coverage=1 00:22:43.582 --rc genhtml_legend=1 00:22:43.582 --rc geninfo_all_blocks=1 00:22:43.582 --rc geninfo_unexecuted_blocks=1 00:22:43.582 00:22:43.582 ' 00:22:43.582 06:49:15 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.583 --rc genhtml_branch_coverage=1 00:22:43.583 --rc genhtml_function_coverage=1 00:22:43.583 --rc genhtml_legend=1 00:22:43.583 --rc geninfo_all_blocks=1 00:22:43.583 --rc geninfo_unexecuted_blocks=1 00:22:43.583 00:22:43.583 ' 00:22:43.583 06:49:15 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.583 --rc genhtml_branch_coverage=1 00:22:43.583 --rc genhtml_function_coverage=1 00:22:43.583 --rc genhtml_legend=1 00:22:43.583 --rc geninfo_all_blocks=1 00:22:43.583 --rc geninfo_unexecuted_blocks=1 00:22:43.583 00:22:43.583 ' 00:22:43.583 06:49:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:22:43.583 OK 00:22:43.583 06:49:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:22:43.583 00:22:43.583 real 0m0.229s 00:22:43.583 user 0m0.142s 00:22:43.583 sys 0m0.098s 00:22:43.583 06:49:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.583 06:49:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:22:43.583 ************************************ 00:22:43.583 END TEST rpc_client 00:22:43.583 ************************************ 00:22:43.583 06:49:16 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:22:43.583 06:49:16 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:43.583 06:49:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.583 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:22:43.583 ************************************ 00:22:43.583 START TEST json_config 00:22:43.583 ************************************ 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.583 06:49:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.583 06:49:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.583 06:49:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.583 06:49:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.583 06:49:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.583 06:49:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.583 06:49:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.583 06:49:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.583 06:49:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.583 06:49:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.583 06:49:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.583 06:49:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:22:43.583 06:49:16 json_config -- scripts/common.sh@345 -- # : 1 00:22:43.583 06:49:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.583 06:49:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.583 06:49:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:22:43.583 06:49:16 json_config -- scripts/common.sh@353 -- # local d=1 00:22:43.583 06:49:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.583 06:49:16 json_config -- scripts/common.sh@355 -- # echo 1 00:22:43.583 06:49:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.583 06:49:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:22:43.583 06:49:16 json_config -- scripts/common.sh@353 -- # local d=2 00:22:43.583 06:49:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.583 06:49:16 json_config -- scripts/common.sh@355 -- # echo 2 00:22:43.583 06:49:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.583 06:49:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.583 06:49:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.583 06:49:16 json_config -- scripts/common.sh@368 -- # return 0 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.583 --rc genhtml_branch_coverage=1 00:22:43.583 --rc genhtml_function_coverage=1 00:22:43.583 --rc genhtml_legend=1 00:22:43.583 --rc geninfo_all_blocks=1 00:22:43.583 --rc geninfo_unexecuted_blocks=1 00:22:43.583 00:22:43.583 ' 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.583 --rc genhtml_branch_coverage=1 00:22:43.583 --rc genhtml_function_coverage=1 00:22:43.583 --rc genhtml_legend=1 00:22:43.583 --rc geninfo_all_blocks=1 00:22:43.583 --rc geninfo_unexecuted_blocks=1 00:22:43.583 00:22:43.583 ' 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.583 --rc genhtml_branch_coverage=1 00:22:43.583 --rc genhtml_function_coverage=1 00:22:43.583 --rc genhtml_legend=1 00:22:43.583 --rc geninfo_all_blocks=1 00:22:43.583 --rc geninfo_unexecuted_blocks=1 00:22:43.583 00:22:43.583 ' 00:22:43.583 06:49:16 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.583 --rc genhtml_branch_coverage=1 00:22:43.583 --rc genhtml_function_coverage=1 00:22:43.583 --rc genhtml_legend=1 00:22:43.583 --rc geninfo_all_blocks=1 00:22:43.583 --rc geninfo_unexecuted_blocks=1 00:22:43.583 00:22:43.583 ' 00:22:43.583 06:49:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.583 06:49:16 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.583 06:49:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7a10858d-5a4c-4885-924e-f934236c3390 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7a10858d-5a4c-4885-924e-f934236c3390 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:43.843 06:49:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.843 06:49:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.843 06:49:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.843 06:49:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.843 06:49:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.843 06:49:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.843 06:49:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.843 06:49:16 json_config -- paths/export.sh@5 -- # export PATH 00:22:43.843 06:49:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@51 -- # : 0 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.843 06:49:16 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.843 06:49:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.843 06:49:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:22:43.843 06:49:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:22:43.843 06:49:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:22:43.843 06:49:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:22:43.843 06:49:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:22:43.843 WARNING: No tests are enabled so not running JSON configuration tests 00:22:43.843 06:49:16 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:22:43.843 06:49:16 json_config -- json_config/json_config.sh@28 -- # exit 0 00:22:43.843 00:22:43.843 real 0m0.169s 00:22:43.843 user 0m0.115s 00:22:43.843 sys 0m0.058s 00:22:43.843 06:49:16 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.843 06:49:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:22:43.843 ************************************ 00:22:43.843 END TEST json_config 00:22:43.843 ************************************ 00:22:43.843 06:49:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:22:43.843 06:49:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:43.843 06:49:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.843 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:22:43.843 ************************************ 00:22:43.843 START TEST json_config_extra_key 00:22:43.843 ************************************ 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.843 06:49:16 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.843 06:49:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.843 --rc genhtml_branch_coverage=1 00:22:43.843 --rc genhtml_function_coverage=1 00:22:43.843 --rc genhtml_legend=1 00:22:43.843 --rc geninfo_all_blocks=1 00:22:43.843 --rc geninfo_unexecuted_blocks=1 00:22:43.843 00:22:43.843 ' 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.843 --rc genhtml_branch_coverage=1 00:22:43.843 --rc genhtml_function_coverage=1 00:22:43.843 --rc genhtml_legend=1 00:22:43.843 --rc geninfo_all_blocks=1 00:22:43.843 --rc geninfo_unexecuted_blocks=1 00:22:43.843 00:22:43.843 ' 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.843 --rc genhtml_branch_coverage=1 00:22:43.843 --rc genhtml_function_coverage=1 00:22:43.843 --rc genhtml_legend=1 00:22:43.843 --rc geninfo_all_blocks=1 00:22:43.843 --rc geninfo_unexecuted_blocks=1 00:22:43.843 00:22:43.843 ' 00:22:43.843 06:49:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.843 --rc genhtml_branch_coverage=1 00:22:43.843 --rc 
genhtml_function_coverage=1 00:22:43.843 --rc genhtml_legend=1 00:22:43.843 --rc geninfo_all_blocks=1 00:22:43.843 --rc geninfo_unexecuted_blocks=1 00:22:43.843 00:22:43.843 ' 00:22:43.843 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7a10858d-5a4c-4885-924e-f934236c3390 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7a10858d-5a4c-4885-924e-f934236c3390 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.843 06:49:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:43.844 06:49:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.844 06:49:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.844 06:49:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.844 06:49:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.844 06:49:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.844 06:49:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.844 06:49:16 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.844 06:49:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:22:43.844 06:49:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.844 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.844 06:49:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:22:43.844 INFO: launching applications... 00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
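The trace above shows json_config/common.sh keeping one associative array per piece of per-app state (app_pid, app_socket, app_params, configs_path), all keyed by the app name ('target'). A minimal sketch of that bookkeeping pattern, with a placeholder config path where convenient:

```bash
#!/usr/bin/env bash
# Per-app bookkeeping via associative arrays, mirroring the names in the
# trace; the config path here is illustrative, not the logged one.
declare -A app_pid=( ['target']='' )
declare -A app_socket=( ['target']='/var/tmp/spdk_tgt.sock' )
declare -A app_params=( ['target']='-m 0x1 -s 1024' )
declare -A configs_path=( ['target']='/path/to/extra_key.json' )

# Helpers take the app name and index every map with it, so the same
# start/stop logic can manage more than one app instance.
show_app() {
    local app=$1
    echo "pid=${app_pid[$app]:-unset} socket=${app_socket[$app]}"
    echo "params=(${app_params[$app]}) config=${configs_path[$app]}"
}

show_app target
```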
00:22:43.844 06:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58644 00:22:43.844 Waiting for target to run... 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58644 /var/tmp/spdk_tgt.sock 00:22:43.844 06:49:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58644 ']' 00:22:43.844 06:49:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:22:43.844 06:49:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:22:43.844 06:49:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:22:43.844 06:49:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:22:43.844 06:49:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.844 06:49:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:22:44.102 [2024-12-06 06:49:16.522569] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:22:44.102 [2024-12-06 06:49:16.522734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58644 ] 00:22:44.360 [2024-12-06 06:49:16.857672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.619 [2024-12-06 06:49:16.973131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.185 06:49:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.185 06:49:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:22:45.185 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:22:45.185 INFO: shutting down applications... 00:22:45.185 06:49:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
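At this point the json_config_extra_key run starts spdk_tgt with the extra_key JSON config and blocks until its RPC socket answers. A hedged sketch of that launch-and-wait step: wait_for_rpc is a simplified stand-in for the real waitforlisten helper, and spdk_get_version (which does appear in the rpc_get_methods listing later in this log) serves as a cheap liveness probe:

```bash
# Launch-and-wait sketch; binary and socket paths match the trace,
# wait_for_rpc is a stand-in, not the helper from autotest_common.sh.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock

"$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
tgt_pid=$!

wait_for_rpc() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        # Socket present and answering a trivial RPC means the target
        # finished loading the JSON config and is ready for the test.
        [[ -S $sock ]] && "$RPC_PY" -s "$sock" spdk_get_version &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

wait_for_rpc "$SOCK" || { kill "$tgt_pid"; exit 1; }
```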
00:22:45.185 06:49:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58644 ]] 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58644 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58644 00:22:45.185 06:49:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:45.752 06:49:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:45.752 06:49:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:45.752 06:49:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58644 00:22:45.752 06:49:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:46.319 06:49:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:46.319 06:49:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:46.319 06:49:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58644 00:22:46.319 06:49:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:46.578 06:49:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:46.578 06:49:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:46.578 06:49:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58644 00:22:46.578 06:49:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:47.146 06:49:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:47.146 06:49:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:47.146 06:49:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58644 00:22:47.146 06:49:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:47.713 06:49:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:47.713 06:49:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:47.713 06:49:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58644 00:22:47.713 06:49:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:22:47.713 06:49:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:22:47.713 06:49:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:22:47.713 SPDK target shutdown done 00:22:47.713 06:49:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:22:47.713 Success 00:22:47.713 06:49:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:22:47.713 00:22:47.713 real 0m3.933s 00:22:47.713 user 0m3.860s 00:22:47.713 sys 0m0.456s 00:22:47.713 06:49:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.713 06:49:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:22:47.713 ************************************ 00:22:47.713 END TEST json_config_extra_key 00:22:47.713 ************************************ 00:22:47.713 06:49:20 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:22:47.713 06:49:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:47.713 06:49:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.713 06:49:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.713 ************************************ 00:22:47.713 START TEST alias_rpc 00:22:47.713 ************************************ 00:22:47.713 06:49:20 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:22:47.713 * Looking for test storage... 00:22:47.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:22:47.713 06:49:20 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:47.713 06:49:20 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:22:47.713 06:49:20 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:47.971 06:49:20 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.971 06:49:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:22:47.971 06:49:20 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.971 06:49:20 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:47.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.971 --rc genhtml_branch_coverage=1 00:22:47.971 --rc genhtml_function_coverage=1 00:22:47.971 --rc genhtml_legend=1 00:22:47.971 --rc geninfo_all_blocks=1 00:22:47.971 --rc geninfo_unexecuted_blocks=1 00:22:47.971 00:22:47.971 ' 00:22:47.971 06:49:20 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:47.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.971 --rc genhtml_branch_coverage=1 00:22:47.971 --rc genhtml_function_coverage=1 00:22:47.971 --rc genhtml_legend=1 00:22:47.971 --rc geninfo_all_blocks=1 00:22:47.971 --rc geninfo_unexecuted_blocks=1 00:22:47.971 00:22:47.971 ' 00:22:47.971 06:49:20 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:47.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.971 --rc genhtml_branch_coverage=1 00:22:47.971 --rc genhtml_function_coverage=1 00:22:47.971 --rc genhtml_legend=1 00:22:47.971 --rc geninfo_all_blocks=1 00:22:47.971 --rc geninfo_unexecuted_blocks=1 00:22:47.971 00:22:47.971 ' 00:22:47.971 06:49:20 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:47.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.971 --rc genhtml_branch_coverage=1 00:22:47.971 --rc genhtml_function_coverage=1 00:22:47.972 --rc genhtml_legend=1 00:22:47.972 --rc geninfo_all_blocks=1 00:22:47.972 --rc geninfo_unexecuted_blocks=1 00:22:47.972 00:22:47.972 ' 00:22:47.972 06:49:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:22:47.972 06:49:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58749 00:22:47.972 06:49:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:47.972 06:49:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58749 00:22:47.972 06:49:20 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58749 ']' 00:22:47.972 06:49:20 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
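The lt 1.15 2 / cmp_versions trace repeated before each test above is the lcov version gate from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field. A simplified, self-contained sketch of that shape (the real script supports more operators and edge cases):

```bash
# Simplified version-compare sketch mirroring the traced logic.
lt() {  # usage: lt 1.15 2  -> succeeds if $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v
    for (( v = 0; v < len; v++ )); do
        # Missing fields count as 0, so 1.15 compares against 2.0.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1  # equal -> not less-than
}

lt 1.15 2 && echo "1.15 < 2: lcov is older than 2.x"
```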
00:22:47.972 06:49:20 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.972 06:49:20 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.972 06:49:20 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.972 06:49:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:47.972 [2024-12-06 06:49:20.528660] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:22:47.972 [2024-12-06 06:49:20.529038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58749 ] 00:22:48.230 [2024-12-06 06:49:20.714116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.230 [2024-12-06 06:49:20.817231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.191 06:49:21 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.191 06:49:21 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:49.191 06:49:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:22:49.450 06:49:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58749 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58749 ']' 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58749 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58749 00:22:49.450 killing process with pid 58749 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58749' 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@973 -- # kill 58749 00:22:49.450 06:49:21 alias_rpc -- common/autotest_common.sh@978 -- # wait 58749 00:22:51.981 ************************************ 00:22:51.981 END TEST alias_rpc 00:22:51.981 ************************************ 00:22:51.981 00:22:51.981 real 0m3.755s 00:22:51.981 user 0m3.984s 00:22:51.981 sys 0m0.468s 00:22:51.981 06:49:23 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.981 06:49:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:51.981 06:49:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:22:51.981 06:49:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:22:51.981 06:49:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:51.981 06:49:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.981 06:49:23 -- common/autotest_common.sh@10 -- # set +x 00:22:51.981 ************************************ 00:22:51.981 START TEST spdkcli_tcp 00:22:51.981 ************************************ 00:22:51.981 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:22:51.981 * Looking for test storage... 
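The killprocess call traced above follows a fixed pattern: confirm the pid is alive with kill -0, read the command name with ps so a sudo wrapper is never signalled, then SIGTERM and reap. A condensed sketch (the traced helper also branches on the uname result before choosing how to inspect the process):

```bash
# Condensed killprocess sketch based on the commands visible in the trace.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0       # already gone: nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name == sudo ]] && return 1      # refuse to signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reaping only works for children
}
```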
00:22:51.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.982 06:49:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:51.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.982 --rc genhtml_branch_coverage=1 00:22:51.982 --rc genhtml_function_coverage=1 00:22:51.982 --rc genhtml_legend=1 00:22:51.982 --rc geninfo_all_blocks=1 00:22:51.982 --rc geninfo_unexecuted_blocks=1 00:22:51.982 00:22:51.982 ' 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:51.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.982 --rc genhtml_branch_coverage=1 00:22:51.982 --rc genhtml_function_coverage=1 00:22:51.982 --rc genhtml_legend=1 00:22:51.982 --rc geninfo_all_blocks=1 00:22:51.982 --rc geninfo_unexecuted_blocks=1 00:22:51.982 
00:22:51.982 ' 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:51.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.982 --rc genhtml_branch_coverage=1 00:22:51.982 --rc genhtml_function_coverage=1 00:22:51.982 --rc genhtml_legend=1 00:22:51.982 --rc geninfo_all_blocks=1 00:22:51.982 --rc geninfo_unexecuted_blocks=1 00:22:51.982 00:22:51.982 ' 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:51.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.982 --rc genhtml_branch_coverage=1 00:22:51.982 --rc genhtml_function_coverage=1 00:22:51.982 --rc genhtml_legend=1 00:22:51.982 --rc geninfo_all_blocks=1 00:22:51.982 --rc geninfo_unexecuted_blocks=1 00:22:51.982 00:22:51.982 ' 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58856 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58856 00:22:51.982 06:49:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58856 ']' 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.982 06:49:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.982 [2024-12-06 06:49:24.314831] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:22:51.982 [2024-12-06 06:49:24.314999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58856 ] 00:22:51.982 [2024-12-06 06:49:24.495372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:52.239 [2024-12-06 06:49:24.600595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.239 [2024-12-06 06:49:24.600610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.806 06:49:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.806 06:49:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:22:52.806 06:49:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:22:52.806 06:49:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58873 00:22:52.806 06:49:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:22:53.370 [ 00:22:53.370 "bdev_malloc_delete", 00:22:53.370 "bdev_malloc_create", 00:22:53.370 "bdev_null_resize", 00:22:53.370 "bdev_null_delete", 00:22:53.370 "bdev_null_create", 00:22:53.370 "bdev_nvme_cuse_unregister", 00:22:53.370 "bdev_nvme_cuse_register", 00:22:53.370 "bdev_opal_new_user", 00:22:53.370 "bdev_opal_set_lock_state", 00:22:53.370 "bdev_opal_delete", 00:22:53.370 "bdev_opal_get_info", 00:22:53.370 "bdev_opal_create", 00:22:53.370 "bdev_nvme_opal_revert", 00:22:53.370 "bdev_nvme_opal_init", 00:22:53.370 "bdev_nvme_send_cmd", 00:22:53.370 "bdev_nvme_set_keys", 00:22:53.370 "bdev_nvme_get_path_iostat", 00:22:53.370 "bdev_nvme_get_mdns_discovery_info", 00:22:53.370 "bdev_nvme_stop_mdns_discovery", 00:22:53.370 "bdev_nvme_start_mdns_discovery", 00:22:53.370 "bdev_nvme_set_multipath_policy", 00:22:53.370 "bdev_nvme_set_preferred_path", 00:22:53.370 "bdev_nvme_get_io_paths", 00:22:53.370 "bdev_nvme_remove_error_injection", 00:22:53.370 "bdev_nvme_add_error_injection", 00:22:53.370 "bdev_nvme_get_discovery_info", 00:22:53.370 "bdev_nvme_stop_discovery", 00:22:53.370 "bdev_nvme_start_discovery", 00:22:53.370 "bdev_nvme_get_controller_health_info", 00:22:53.370 "bdev_nvme_disable_controller", 00:22:53.370 "bdev_nvme_enable_controller", 00:22:53.370 "bdev_nvme_reset_controller", 00:22:53.370 "bdev_nvme_get_transport_statistics", 00:22:53.370 "bdev_nvme_apply_firmware", 00:22:53.370 "bdev_nvme_detach_controller", 00:22:53.370 "bdev_nvme_get_controllers", 00:22:53.370 "bdev_nvme_attach_controller", 00:22:53.370 "bdev_nvme_set_hotplug", 00:22:53.370 "bdev_nvme_set_options", 00:22:53.370 "bdev_passthru_delete", 00:22:53.370 "bdev_passthru_create", 00:22:53.370 "bdev_lvol_set_parent_bdev", 00:22:53.370 "bdev_lvol_set_parent", 00:22:53.370 "bdev_lvol_check_shallow_copy", 00:22:53.370 "bdev_lvol_start_shallow_copy", 00:22:53.370 "bdev_lvol_grow_lvstore", 00:22:53.370 "bdev_lvol_get_lvols", 00:22:53.370 "bdev_lvol_get_lvstores", 00:22:53.370 "bdev_lvol_delete", 00:22:53.370 "bdev_lvol_set_read_only", 00:22:53.370 "bdev_lvol_resize", 00:22:53.370 "bdev_lvol_decouple_parent", 00:22:53.370 "bdev_lvol_inflate", 00:22:53.370 "bdev_lvol_rename", 00:22:53.370 "bdev_lvol_clone_bdev", 00:22:53.370 "bdev_lvol_clone", 00:22:53.370 "bdev_lvol_snapshot", 00:22:53.370 "bdev_lvol_create", 00:22:53.370 "bdev_lvol_delete_lvstore", 00:22:53.370 "bdev_lvol_rename_lvstore", 00:22:53.370 
"bdev_lvol_create_lvstore", 00:22:53.370 "bdev_raid_set_options", 00:22:53.370 "bdev_raid_remove_base_bdev", 00:22:53.370 "bdev_raid_add_base_bdev", 00:22:53.370 "bdev_raid_delete", 00:22:53.370 "bdev_raid_create", 00:22:53.370 "bdev_raid_get_bdevs", 00:22:53.370 "bdev_error_inject_error", 00:22:53.370 "bdev_error_delete", 00:22:53.370 "bdev_error_create", 00:22:53.370 "bdev_split_delete", 00:22:53.370 "bdev_split_create", 00:22:53.370 "bdev_delay_delete", 00:22:53.370 "bdev_delay_create", 00:22:53.370 "bdev_delay_update_latency", 00:22:53.370 "bdev_zone_block_delete", 00:22:53.370 "bdev_zone_block_create", 00:22:53.370 "blobfs_create", 00:22:53.370 "blobfs_detect", 00:22:53.370 "blobfs_set_cache_size", 00:22:53.370 "bdev_xnvme_delete", 00:22:53.370 "bdev_xnvme_create", 00:22:53.370 "bdev_aio_delete", 00:22:53.370 "bdev_aio_rescan", 00:22:53.370 "bdev_aio_create", 00:22:53.370 "bdev_ftl_set_property", 00:22:53.370 "bdev_ftl_get_properties", 00:22:53.370 "bdev_ftl_get_stats", 00:22:53.370 "bdev_ftl_unmap", 00:22:53.370 "bdev_ftl_unload", 00:22:53.370 "bdev_ftl_delete", 00:22:53.370 "bdev_ftl_load", 00:22:53.370 "bdev_ftl_create", 00:22:53.370 "bdev_virtio_attach_controller", 00:22:53.370 "bdev_virtio_scsi_get_devices", 00:22:53.370 "bdev_virtio_detach_controller", 00:22:53.370 "bdev_virtio_blk_set_hotplug", 00:22:53.370 "bdev_iscsi_delete", 00:22:53.370 "bdev_iscsi_create", 00:22:53.370 "bdev_iscsi_set_options", 00:22:53.370 "accel_error_inject_error", 00:22:53.370 "ioat_scan_accel_module", 00:22:53.370 "dsa_scan_accel_module", 00:22:53.370 "iaa_scan_accel_module", 00:22:53.370 "keyring_file_remove_key", 00:22:53.370 "keyring_file_add_key", 00:22:53.370 "keyring_linux_set_options", 00:22:53.370 "fsdev_aio_delete", 00:22:53.370 "fsdev_aio_create", 00:22:53.370 "iscsi_get_histogram", 00:22:53.370 "iscsi_enable_histogram", 00:22:53.370 "iscsi_set_options", 00:22:53.370 "iscsi_get_auth_groups", 00:22:53.370 "iscsi_auth_group_remove_secret", 00:22:53.370 "iscsi_auth_group_add_secret", 00:22:53.370 "iscsi_delete_auth_group", 00:22:53.370 "iscsi_create_auth_group", 00:22:53.370 "iscsi_set_discovery_auth", 00:22:53.370 "iscsi_get_options", 00:22:53.370 "iscsi_target_node_request_logout", 00:22:53.370 "iscsi_target_node_set_redirect", 00:22:53.370 "iscsi_target_node_set_auth", 00:22:53.370 "iscsi_target_node_add_lun", 00:22:53.370 "iscsi_get_stats", 00:22:53.370 "iscsi_get_connections", 00:22:53.370 "iscsi_portal_group_set_auth", 00:22:53.370 "iscsi_start_portal_group", 00:22:53.370 "iscsi_delete_portal_group", 00:22:53.370 "iscsi_create_portal_group", 00:22:53.370 "iscsi_get_portal_groups", 00:22:53.370 "iscsi_delete_target_node", 00:22:53.370 "iscsi_target_node_remove_pg_ig_maps", 00:22:53.370 "iscsi_target_node_add_pg_ig_maps", 00:22:53.370 "iscsi_create_target_node", 00:22:53.370 "iscsi_get_target_nodes", 00:22:53.370 "iscsi_delete_initiator_group", 00:22:53.370 "iscsi_initiator_group_remove_initiators", 00:22:53.370 "iscsi_initiator_group_add_initiators", 00:22:53.370 "iscsi_create_initiator_group", 00:22:53.370 "iscsi_get_initiator_groups", 00:22:53.370 "nvmf_set_crdt", 00:22:53.370 "nvmf_set_config", 00:22:53.370 "nvmf_set_max_subsystems", 00:22:53.370 "nvmf_stop_mdns_prr", 00:22:53.370 "nvmf_publish_mdns_prr", 00:22:53.370 "nvmf_subsystem_get_listeners", 00:22:53.370 "nvmf_subsystem_get_qpairs", 00:22:53.370 "nvmf_subsystem_get_controllers", 00:22:53.370 "nvmf_get_stats", 00:22:53.370 "nvmf_get_transports", 00:22:53.370 "nvmf_create_transport", 00:22:53.370 "nvmf_get_targets", 00:22:53.370 
"nvmf_delete_target", 00:22:53.370 "nvmf_create_target", 00:22:53.370 "nvmf_subsystem_allow_any_host", 00:22:53.370 "nvmf_subsystem_set_keys", 00:22:53.370 "nvmf_subsystem_remove_host", 00:22:53.370 "nvmf_subsystem_add_host", 00:22:53.370 "nvmf_ns_remove_host", 00:22:53.370 "nvmf_ns_add_host", 00:22:53.370 "nvmf_subsystem_remove_ns", 00:22:53.370 "nvmf_subsystem_set_ns_ana_group", 00:22:53.370 "nvmf_subsystem_add_ns", 00:22:53.370 "nvmf_subsystem_listener_set_ana_state", 00:22:53.370 "nvmf_discovery_get_referrals", 00:22:53.370 "nvmf_discovery_remove_referral", 00:22:53.370 "nvmf_discovery_add_referral", 00:22:53.370 "nvmf_subsystem_remove_listener", 00:22:53.370 "nvmf_subsystem_add_listener", 00:22:53.370 "nvmf_delete_subsystem", 00:22:53.370 "nvmf_create_subsystem", 00:22:53.370 "nvmf_get_subsystems", 00:22:53.370 "env_dpdk_get_mem_stats", 00:22:53.370 "nbd_get_disks", 00:22:53.370 "nbd_stop_disk", 00:22:53.370 "nbd_start_disk", 00:22:53.370 "ublk_recover_disk", 00:22:53.370 "ublk_get_disks", 00:22:53.370 "ublk_stop_disk", 00:22:53.370 "ublk_start_disk", 00:22:53.370 "ublk_destroy_target", 00:22:53.370 "ublk_create_target", 00:22:53.370 "virtio_blk_create_transport", 00:22:53.370 "virtio_blk_get_transports", 00:22:53.370 "vhost_controller_set_coalescing", 00:22:53.370 "vhost_get_controllers", 00:22:53.370 "vhost_delete_controller", 00:22:53.370 "vhost_create_blk_controller", 00:22:53.370 "vhost_scsi_controller_remove_target", 00:22:53.370 "vhost_scsi_controller_add_target", 00:22:53.370 "vhost_start_scsi_controller", 00:22:53.370 "vhost_create_scsi_controller", 00:22:53.370 "thread_set_cpumask", 00:22:53.370 "scheduler_set_options", 00:22:53.370 "framework_get_governor", 00:22:53.370 "framework_get_scheduler", 00:22:53.370 "framework_set_scheduler", 00:22:53.370 "framework_get_reactors", 00:22:53.370 "thread_get_io_channels", 00:22:53.370 "thread_get_pollers", 00:22:53.370 "thread_get_stats", 00:22:53.370 "framework_monitor_context_switch", 00:22:53.370 "spdk_kill_instance", 00:22:53.370 "log_enable_timestamps", 00:22:53.370 "log_get_flags", 00:22:53.370 "log_clear_flag", 00:22:53.370 "log_set_flag", 00:22:53.370 "log_get_level", 00:22:53.370 "log_set_level", 00:22:53.370 "log_get_print_level", 00:22:53.370 "log_set_print_level", 00:22:53.370 "framework_enable_cpumask_locks", 00:22:53.370 "framework_disable_cpumask_locks", 00:22:53.370 "framework_wait_init", 00:22:53.370 "framework_start_init", 00:22:53.370 "scsi_get_devices", 00:22:53.371 "bdev_get_histogram", 00:22:53.371 "bdev_enable_histogram", 00:22:53.371 "bdev_set_qos_limit", 00:22:53.371 "bdev_set_qd_sampling_period", 00:22:53.371 "bdev_get_bdevs", 00:22:53.371 "bdev_reset_iostat", 00:22:53.371 "bdev_get_iostat", 00:22:53.371 "bdev_examine", 00:22:53.371 "bdev_wait_for_examine", 00:22:53.371 "bdev_set_options", 00:22:53.371 "accel_get_stats", 00:22:53.371 "accel_set_options", 00:22:53.371 "accel_set_driver", 00:22:53.371 "accel_crypto_key_destroy", 00:22:53.371 "accel_crypto_keys_get", 00:22:53.371 "accel_crypto_key_create", 00:22:53.371 "accel_assign_opc", 00:22:53.371 "accel_get_module_info", 00:22:53.371 "accel_get_opc_assignments", 00:22:53.371 "vmd_rescan", 00:22:53.371 "vmd_remove_device", 00:22:53.371 "vmd_enable", 00:22:53.371 "sock_get_default_impl", 00:22:53.371 "sock_set_default_impl", 00:22:53.371 "sock_impl_set_options", 00:22:53.371 "sock_impl_get_options", 00:22:53.371 "iobuf_get_stats", 00:22:53.371 "iobuf_set_options", 00:22:53.371 "keyring_get_keys", 00:22:53.371 "framework_get_pci_devices", 00:22:53.371 
"framework_get_config", 00:22:53.371 "framework_get_subsystems", 00:22:53.371 "fsdev_set_opts", 00:22:53.371 "fsdev_get_opts", 00:22:53.371 "trace_get_info", 00:22:53.371 "trace_get_tpoint_group_mask", 00:22:53.371 "trace_disable_tpoint_group", 00:22:53.371 "trace_enable_tpoint_group", 00:22:53.371 "trace_clear_tpoint_mask", 00:22:53.371 "trace_set_tpoint_mask", 00:22:53.371 "notify_get_notifications", 00:22:53.371 "notify_get_types", 00:22:53.371 "spdk_get_version", 00:22:53.371 "rpc_get_methods" 00:22:53.371 ] 00:22:53.371 06:49:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.371 06:49:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:53.371 06:49:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58856 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58856 ']' 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58856 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58856 00:22:53.371 killing process with pid 58856 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58856' 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58856 00:22:53.371 06:49:25 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58856 00:22:55.896 ************************************ 00:22:55.896 END TEST spdkcli_tcp 00:22:55.896 ************************************ 00:22:55.896 00:22:55.896 real 0m3.934s 00:22:55.896 user 0m7.365s 00:22:55.896 sys 0m0.520s 00:22:55.896 06:49:27 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.896 06:49:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.896 06:49:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:22:55.896 06:49:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:55.896 06:49:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.896 06:49:27 -- common/autotest_common.sh@10 -- # set +x 00:22:55.896 ************************************ 00:22:55.896 START TEST dpdk_mem_utility 00:22:55.897 ************************************ 00:22:55.897 06:49:27 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:22:55.897 * Looking for test storage... 
00:22:55.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:22:55.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.897 06:49:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:55.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.897 --rc genhtml_branch_coverage=1 00:22:55.897 --rc genhtml_function_coverage=1 00:22:55.897 --rc genhtml_legend=1 00:22:55.897 --rc geninfo_all_blocks=1 00:22:55.897 --rc geninfo_unexecuted_blocks=1 00:22:55.897 00:22:55.897 ' 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:55.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.897 --rc genhtml_branch_coverage=1 00:22:55.897 --rc genhtml_function_coverage=1 00:22:55.897 --rc genhtml_legend=1 00:22:55.897 --rc geninfo_all_blocks=1 00:22:55.897 --rc geninfo_unexecuted_blocks=1 00:22:55.897 00:22:55.897 ' 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:55.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.897 --rc genhtml_branch_coverage=1 00:22:55.897 --rc genhtml_function_coverage=1 00:22:55.897 --rc genhtml_legend=1 00:22:55.897 --rc geninfo_all_blocks=1 00:22:55.897 --rc geninfo_unexecuted_blocks=1 00:22:55.897 00:22:55.897 ' 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:55.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.897 --rc genhtml_branch_coverage=1 00:22:55.897 --rc genhtml_function_coverage=1 00:22:55.897 --rc genhtml_legend=1 00:22:55.897 --rc geninfo_all_blocks=1 00:22:55.897 --rc geninfo_unexecuted_blocks=1 00:22:55.897 00:22:55.897 ' 00:22:55.897 06:49:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:22:55.897 06:49:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58977 00:22:55.897 06:49:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.897 06:49:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58977 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58977 ']' 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.897 06:49:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:22:55.897 [2024-12-06 06:49:28.262869] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
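For the dpdk_mem_utility test now starting, the flow behind the dump that follows is: call the env_dpdk_get_mem_stats RPC, which per the trace replies with the dump filename /tmp/spdk_mem_dump.txt, then run dpdk_mem_info.py over it. A sketch, under the assumption (suggested by the "heap id: 0" detail printed after the traced `-m 0` invocation) that the flag selects per-heap output:

```bash
# Memory-stats sketch; the -m flag semantics are an assumption inferred
# from the heap-0 detail that follows it in this log.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

# Asks the running target to write its DPDK memory stats; the reply is
# JSON like: { "filename": "/tmp/spdk_mem_dump.txt" }
"$RPC_PY" env_dpdk_get_mem_stats

"$MEM_SCRIPT"        # heap / mempool / memzone summary
"$MEM_SCRIPT" -m 0   # assumed: detailed element listing for heap id 0
```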
00:22:55.897 [2024-12-06 06:49:28.263231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:22:55.897 [2024-12-06 06:49:28.442521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.156 [2024-12-06 06:49:28.566959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.772 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.772 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:22:56.772 06:49:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:22:56.772 06:49:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:22:56.772 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.772 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:22:56.772 { 00:22:56.772 "filename": "/tmp/spdk_mem_dump.txt" 00:22:56.772 } 00:22:56.772 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.772 06:49:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:22:57.033 DPDK memory size 824.000000 MiB in 1 heap(s) 00:22:57.033 1 heaps totaling size 824.000000 MiB 00:22:57.033 size: 824.000000 MiB heap id: 0 00:22:57.033 end heaps---------- 00:22:57.033 9 mempools totaling size 603.782043 MiB 00:22:57.033 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:22:57.033 size: 158.602051 MiB name: PDU_data_out_Pool 00:22:57.033 size: 100.555481 MiB name: bdev_io_58977 00:22:57.033 size: 50.003479 MiB name: msgpool_58977 00:22:57.033 size: 36.509338 MiB name: fsdev_io_58977 00:22:57.033 size: 21.763794 MiB name: PDU_Pool 00:22:57.033 size: 19.513306 MiB name: SCSI_TASK_Pool 00:22:57.033 size: 4.133484 MiB name: evtpool_58977 00:22:57.033 size: 0.026123 MiB name: Session_Pool 00:22:57.033 end mempools------- 00:22:57.033 6 memzones totaling size 4.142822 MiB 00:22:57.033 size: 1.000366 MiB name: RG_ring_0_58977 00:22:57.033 size: 1.000366 MiB name: RG_ring_1_58977 00:22:57.033 size: 1.000366 MiB name: RG_ring_4_58977 00:22:57.033 size: 1.000366 MiB name: RG_ring_5_58977 00:22:57.033 size: 0.125366 MiB name: RG_ring_2_58977 00:22:57.033 size: 0.015991 MiB name: RG_ring_3_58977 00:22:57.033 end memzones------- 00:22:57.033 06:49:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:22:57.033 heap id: 0 total size: 824.000000 MiB number of busy elements: 316 number of free elements: 18 00:22:57.033 list of free elements. 
00:22:57.033 list of free elements. size: 16.781128 MiB
00:22:57.033 element at address: 0x200006400000 with size: 1.995972 MiB
00:22:57.033 element at address: 0x20000a600000 with size: 1.995972 MiB
00:22:57.033 element at address: 0x200003e00000 with size: 1.991028 MiB
00:22:57.033 element at address: 0x200019500040 with size: 0.999939 MiB
00:22:57.033 element at address: 0x200019900040 with size: 0.999939 MiB
00:22:57.033 element at address: 0x200019a00000 with size: 0.999084 MiB
00:22:57.033 element at address: 0x200032600000 with size: 0.994324 MiB
00:22:57.033 element at address: 0x200000400000 with size: 0.992004 MiB
00:22:57.033 element at address: 0x200019200000 with size: 0.959656 MiB
00:22:57.033 element at address: 0x200019d00040 with size: 0.936401 MiB
00:22:57.033 element at address: 0x200000200000 with size: 0.716980 MiB
00:22:57.033 element at address: 0x20001b400000 with size: 0.562439 MiB
00:22:57.033 element at address: 0x200000c00000 with size: 0.489197 MiB
00:22:57.033 element at address: 0x200019600000 with size: 0.487976 MiB
00:22:57.033 element at address: 0x200019e00000 with size: 0.485413 MiB
00:22:57.033 element at address: 0x200012c00000 with size: 0.433472 MiB
00:22:57.033 element at address: 0x200028800000 with size: 0.390442 MiB
00:22:57.033 element at address: 0x200000800000 with size: 0.350891 MiB
00:22:57.033 list of standard malloc elements. size: 199.287964 MiB
00:22:57.033 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:22:57.033 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:22:57.033 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:22:57.033 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:22:57.033 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:22:57.033 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:22:57.033 element at address: 0x200019deff40 with size: 0.062683 MiB
00:22:57.033 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:22:57.033 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:22:57.033 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:22:57.033 element at address: 0x200012bff040 with size: 0.000305 MiB
(the remaining element-by-element listing is elided for readability: several hundred 0.000244 MiB pool entries spanning the 0x2000002..., 0x2000004..., 0x2000008..., 0x200000c..., 0x20000a5..., 0x200012b/c..., 0x2000192/6..., 0x200019a/d/e..., 0x20001b4... and 0x2000288... address ranges)
00:22:57.035 list of memzone associated elements. size: 607.930908 MiB
00:22:57.035 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:22:57.035 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:22:57.035 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:22:57.035 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:22:57.035 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:22:57.035 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58977_0
(the remaining association pairs are elided: each mempool and ring from the summary above is backed by a like-named memzone, e.g. MP_msgpool_58977_0, MP_fsdev_io_58977_0, MP_PDU_Pool_0, MP_SCSI_TASK_Pool_0, MP_evtpool_58977_0, MP_Session_Pool_0 and the RG_ring_0..5_58977 / RG_MP_* bookkeeping regions)
00:22:57.036 06:49:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:22:57.036 06:49:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58977
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58977 ']'
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58977
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58977
00:22:57.036 killing process with pid 58977
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58977'
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58977
00:22:57.036 06:49:29 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58977
00:22:59.563 
00:22:59.563 real 0m3.626s
00:22:59.563 user 0m3.770s
00:22:59.563 sys 0m0.478s
00:22:59.563 06:49:31 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:59.563 ************************************
00:22:59.563 END TEST dpdk_mem_utility
00:22:59.563 ************************************
00:22:59.563 06:49:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:22:59.563 06:49:31 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:22:59.563 06:49:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:59.563 06:49:31 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:59.563 06:49:31 -- common/autotest_common.sh@10 -- # set +x
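Editor's note: the dpdk_mem_utility test above is driven entirely over SPDK's JSON-RPC socket. A minimal sketch of the same workflow against a running spdk_tgt, assuming the default /var/tmp/spdk.sock RPC socket and a checkout at the job's /home/vagrant/spdk_repo/spdk path (both assumptions taken from this log, not a guaranteed layout):

    cd /home/vagrant/spdk_repo/spdk
    # Ask the target to dump its DPDK memory stats; the RPC replies with the dump file path
    ./scripts/rpc.py env_dpdk_get_mem_stats      # => { "filename": "/tmp/spdk_mem_dump.txt" }
    # Summarize heaps, mempools and memzones from the dump just written
    ./scripts/dpdk_mem_info.py
    # Show the element-by-element layout of heap 0, as captured in the listing above
    ./scripts/dpdk_mem_info.py -m 0
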
00:22:59.563 ************************************
00:22:59.563 START TEST event
00:22:59.563 ************************************
00:22:59.563 06:49:31 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:22:59.563 * Looking for test storage...
00:22:59.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
(lcov version-detection trace from scripts/common.sh elided: cmp_versions establishes lcov 1.15 < 2, and LCOV_OPTS/LCOV are exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo coverage flags)
00:22:59.563 06:49:31 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:22:59.563 06:49:31 event -- bdev/nbd_common.sh@6 -- # set -e
00:22:59.563 06:49:31 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:22:59.563 06:49:31 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:22:59.563 06:49:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:59.563 06:49:31 event -- common/autotest_common.sh@10 -- # set +x
00:22:59.563 ************************************
00:22:59.563 START TEST event_perf
00:22:59.563 ************************************
00:22:59.563 06:49:31 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:22:59.563 Running I/O for 1 seconds...[2024-12-06 06:49:31.880976] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
00:22:59.563 [2024-12-06 06:49:31.881330] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59075 ]
00:22:59.563 [2024-12-06 06:49:32.074700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:59.823 [2024-12-06 06:49:32.241530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:59.823 [2024-12-06 06:49:32.241666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:59.823 [2024-12-06 06:49:32.241774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:59.823 [2024-12-06 06:49:32.241796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:23:01.197 Running I/O for 1 seconds...
00:23:01.197 lcore 0: 183805
00:23:01.197 lcore 1: 183803
00:23:01.197 lcore 2: 183803
00:23:01.197 lcore 3: 183803
00:23:01.197 done.
00:23:01.197 
00:23:01.197 real 0m1.647s
00:23:01.197 user 0m4.380s
00:23:01.197 sys 0m0.124s
00:23:01.197 06:49:33 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:01.197 06:49:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:23:01.197 ************************************
00:23:01.197 END TEST event_perf
00:23:01.197 ************************************
00:23:01.197 06:49:33 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:23:01.197 06:49:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:23:01.197 06:49:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:01.197 06:49:33 event -- common/autotest_common.sh@10 -- # set +x
00:23:01.197 ************************************
00:23:01.197 START TEST event_reactor
00:23:01.197 ************************************
00:23:01.197 06:49:33 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:23:01.197 [2024-12-06 06:49:33.575806] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
00:23:01.197 [2024-12-06 06:49:33.575964] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59120 ]
00:23:01.197 [2024-12-06 06:49:33.750124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:01.456 [2024-12-06 06:49:33.852097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:02.885 test_start
00:23:02.885 oneshot
00:23:02.885 tick 100
00:23:02.885 tick 100
00:23:02.885 tick 250
00:23:02.885 tick 100
00:23:02.885 tick 100
00:23:02.885 tick 100
00:23:02.885 tick 250
00:23:02.885 tick 500
00:23:02.885 tick 100
00:23:02.885 tick 100
00:23:02.885 tick 250
00:23:02.885 tick 100
00:23:02.885 tick 100
00:23:02.885 test_end
00:23:02.885 
00:23:02.885 real 0m1.538s
00:23:02.885 user 0m1.358s
00:23:02.885 sys 0m0.071s
00:23:02.885 ************************************
00:23:02.885 END TEST event_reactor
00:23:02.885 ************************************
00:23:02.885 06:49:35 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:02.885 06:49:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
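Editor's note: the binaries exercised in this block are small standalone event-framework benchmarks and can be rerun outside the harness. A sketch, assuming the same repo layout as this job; the commands themselves are exactly the ones traced above, where -t is the run time in seconds and -m the lcore mask:

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts on a 4-core mask
    ./test/event/reactor/reactor -t 1                # one reactor running the oneshot/tick pollers shown above
    ./test/event/reactor_perf/reactor_perf -t 1      # prints raw events per second for one reactor
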
00:23:02.885 06:49:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:23:02.885 06:49:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:23:02.885 06:49:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:02.885 06:49:35 event -- common/autotest_common.sh@10 -- # set +x
00:23:02.885 ************************************
00:23:02.885 START TEST event_reactor_perf
00:23:02.885 ************************************
00:23:02.885 06:49:35 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:23:02.885 [2024-12-06 06:49:35.166315] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
00:23:02.885 [2024-12-06 06:49:35.166662] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59157 ]
00:23:02.885 [2024-12-06 06:49:35.347766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:03.143 [2024-12-06 06:49:35.458243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:04.519 test_start
00:23:04.519 test_end
00:23:04.519 Performance: 267105 events per second
00:23:04.520 
00:23:04.520 real 0m1.556s
00:23:04.520 user 0m1.359s
00:23:04.520 sys 0m0.087s
00:23:04.520 06:49:36 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:04.520 ************************************
00:23:04.520 END TEST event_reactor_perf
00:23:04.520 ************************************
00:23:04.520 06:49:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:23:04.520 06:49:36 event -- event/event.sh@49 -- # uname -s
00:23:04.520 06:49:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:23:04.520 06:49:36 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:23:04.520 06:49:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:04.520 06:49:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:04.520 06:49:36 event -- common/autotest_common.sh@10 -- # set +x
00:23:04.520 ************************************
00:23:04.520 START TEST event_scheduler
00:23:04.520 ************************************
00:23:04.520 06:49:36 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:23:04.520 * Looking for test storage...
00:23:04.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
(the same lcov/coverage option-detection trace from scripts/common.sh runs again for the scheduler test and is elided here)
00:23:04.520 06:49:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:23:04.520 06:49:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59227
00:23:04.520 06:49:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:23:04.520 06:49:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:23:04.520 06:49:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59227
00:23:04.520 06:49:36 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59227 ']'
00:23:04.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:04.520 06:49:36 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:04.520 06:49:36 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:04.520 06:49:36 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:04.520 06:49:36 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:04.520 06:49:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:23:04.520 [2024-12-06 06:49:37.029370] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
00:23:04.520 [2024-12-06 06:49:37.029746] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59227 ]
00:23:04.779 [2024-12-06 06:49:37.223096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:04.779 [2024-12-06 06:49:37.357071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:04.779 [2024-12-06 06:49:37.357203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:04.779 [2024-12-06 06:49:37.357338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:04.779 [2024-12-06 06:49:37.357349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:23:05.713 06:49:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:05.713 06:49:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:23:05.713 06:49:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:23:05.713 06:49:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:05.713 06:49:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:23:05.713 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:23:05.713 POWER: Cannot set governor of lcore 0 to userspace
00:23:05.713 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:23:05.713 POWER: Cannot set governor of lcore 0 to performance
00:23:05.713 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:23:05.713 POWER: Cannot set governor of lcore 0 to userspace
00:23:05.713 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:23:05.713 POWER: Cannot set governor of lcore 0 to userspace
00:23:05.713 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:23:05.713 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:23:05.713 POWER: Unable to set Power Management Environment for lcore 0
00:23:05.713 [2024-12-06 06:49:38.088343] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:23:05.713 [2024-12-06 06:49:38.088480] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:23:05.713 [2024-12-06 06:49:38.088535] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:23:05.713 [2024-12-06 06:49:38.088654] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:23:05.713 [2024-12-06 06:49:38.088676] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:23:05.713 [2024-12-06 06:49:38.088691] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:23:05.713 06:49:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:05.713 06:49:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:23:05.713 06:49:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:05.713 06:49:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:23:05.972 [2024-12-06 06:49:38.380639] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:23:05.972 06:49:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
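Editor's note: the scheduler app is launched with --wait-for-rpc, so it idles until the harness configures it over the RPC socket; only framework_start_init completes startup. A sketch of the same handshake, assuming it is run from the repo root with the default /var/tmp/spdk.sock socket (assumptions matching this job):

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic   # must happen before init
    ./scripts/rpc.py framework_start_init              # brings the framework up; the dynamic scheduler takes over

The POWER/GUEST_CHANNEL errors above appear benign in this VM: with no writable cpufreq governor and no virtio power agent, the dpdk governor fails to initialize and the dynamic scheduler simply proceeds without it, using the defaults it then reports (load limit 20, core limit 80, core busy 95).
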
00:23:05.972 06:49:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:23:05.972 06:49:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:05.972 06:49:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:05.972 06:49:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:23:05.972 ************************************
00:23:05.972 START TEST scheduler_create_thread
00:23:05.972 ************************************
00:23:05.972 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:23:05.972 06:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:23:05.972 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:05.972 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:23:05.972 2
00:23:05.972 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
(the analogous scheduler_thread_create traces for scheduler.sh@13-@19, @21 and @22 are elided: three more active_pinned threads on masks 0x2/0x4/0x8, four idle_pinned threads on masks 0x1/0x2/0x4/0x8 with -a 0, one_third_active with -a 30 and half_active with -a 0, echoing thread ids 3 through 10; the half_active id is captured below as thread_id=11)
00:23:05.973 06:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:23:05.973 06:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:23:05.973 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:05.973 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:23:05.973 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:05.973 06:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:23:05.973 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:05.973 06:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:23:07.873 06:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.873 06:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:23:07.873 06:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:23:07.873 06:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.873 06:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:23:08.439 ************************************
00:23:08.440 END TEST scheduler_create_thread
00:23:08.440 ************************************
00:23:08.440 06:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.440 
00:23:08.440 real 0m2.620s
00:23:08.440 user 0m0.018s
00:23:08.440 sys 0m0.007s
00:23:08.440 06:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:08.440 06:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
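Editor's note: the thread-lifecycle RPCs in this test come from its scheduler_plugin rather than core rpc.py. A sketch of the sequence the trace condenses, assuming the plugin module is importable (e.g. with PYTHONPATH pointing at test/event/scheduler, an assumption about this repo's layout):

    rpc() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }
    id=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)  # returns the new thread id (2, 3, ... above)
    rpc scheduler_thread_set_active 11 50                             # lower thread 11 to 50% active load
    rpc scheduler_thread_delete 12                                    # remove the short-lived 'deleted' thread
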
common/autotest_common.sh@978 -- # wait 59227 00:23:08.957 [2024-12-06 06:49:41.491317] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:23:10.342 00:23:10.342 real 0m5.786s 00:23:10.342 user 0m10.487s 00:23:10.342 sys 0m0.443s 00:23:10.342 ************************************ 00:23:10.342 END TEST event_scheduler 00:23:10.342 ************************************ 00:23:10.342 06:49:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.342 06:49:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:23:10.342 06:49:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:23:10.342 06:49:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:23:10.342 06:49:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:10.342 06:49:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.342 06:49:42 event -- common/autotest_common.sh@10 -- # set +x 00:23:10.342 ************************************ 00:23:10.342 START TEST app_repeat 00:23:10.342 ************************************ 00:23:10.342 06:49:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59339 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:23:10.342 Process app_repeat pid: 59339 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59339' 00:23:10.342 spdk_app_start Round 0 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:23:10.342 06:49:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59339 /var/tmp/spdk-nbd.sock 00:23:10.342 06:49:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59339 ']' 00:23:10.342 06:49:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:10.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:10.342 06:49:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.342 06:49:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:10.342 06:49:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.342 06:49:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:10.342 [2024-12-06 06:49:42.645958] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
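The event_scheduler teardown traced above is autotest_common.sh's killprocess flow: check the pid is still alive with kill -0, read its command name from ps (reactor_2 here) so sudo-wrapped targets can be special-cased, then kill and wait to reap it. A condensed sketch of that flow; the real helper handles more edge cases:

    killprocess() {  # usage: killprocess <pid>
        local pid=$1 process_name
        [ -n "$pid" ] || return 1        # no pid recorded, nothing to kill
        kill -0 "$pid" || return 0       # process already exited
        if [ "$(uname)" = Linux ]; then
            # command name of the target (reactor_2 in the trace above);
            # the real helper uses it to special-case sudo wrappers
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true              # reap; the app exits via the signal
    }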
00:23:10.342 [2024-12-06 06:49:42.646170] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59339 ] 00:23:10.342 [2024-12-06 06:49:42.853472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:10.601 [2024-12-06 06:49:42.986231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.601 [2024-12-06 06:49:42.986243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.168 06:49:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.168 06:49:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:23:11.168 06:49:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:11.428 Malloc0 00:23:11.687 06:49:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:11.945 Malloc1 00:23:11.945 06:49:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:11.945 06:49:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:12.203 /dev/nbd0 00:23:12.203 06:49:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:12.203 06:49:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:12.203 06:49:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:12.203 06:49:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:23:12.203 06:49:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:12.203 06:49:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:12.203 06:49:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:12.203 06:49:44 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:23:12.203 06:49:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:12.203 06:49:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:12.204 06:49:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:12.204 1+0 records in 00:23:12.204 1+0 records out 00:23:12.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318715 s, 12.9 MB/s 00:23:12.204 06:49:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:12.204 06:49:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:23:12.204 06:49:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:12.204 06:49:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:12.204 06:49:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:23:12.204 06:49:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:12.204 06:49:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:12.204 06:49:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:12.461 /dev/nbd1 00:23:12.461 06:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:12.718 06:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:12.718 1+0 records in 00:23:12.718 1+0 records out 00:23:12.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315988 s, 13.0 MB/s 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:12.718 06:49:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:23:12.718 06:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:12.718 06:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:12.718 06:49:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:12.718 06:49:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
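Both nbd_start_disk calls above are gated by the waitfornbd helper whose xtrace is visible here: poll /proc/partitions until the named device appears, then prove the export is serviceable with a single O_DIRECT 4 KiB read and a size check on the copied file. A sketch using the same retry bound and scratch-file name as the trace (the sleep between polls is assumed, not shown in the xtrace):

    waitfornbd() {  # usage: waitfornbd nbd0
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct-I/O read proves the device actually serves data
        dd if="/dev/$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s nbdtest)
        rm -f nbdtest
        [ "$size" != 0 ]                 # succeed only if data came back
    }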
00:23:12.718 06:49:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:12.976 { 00:23:12.976 "nbd_device": "/dev/nbd0", 00:23:12.976 "bdev_name": "Malloc0" 00:23:12.976 }, 00:23:12.976 { 00:23:12.976 "nbd_device": "/dev/nbd1", 00:23:12.976 "bdev_name": "Malloc1" 00:23:12.976 } 00:23:12.976 ]' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:12.976 { 00:23:12.976 "nbd_device": "/dev/nbd0", 00:23:12.976 "bdev_name": "Malloc0" 00:23:12.976 }, 00:23:12.976 { 00:23:12.976 "nbd_device": "/dev/nbd1", 00:23:12.976 "bdev_name": "Malloc1" 00:23:12.976 } 00:23:12.976 ]' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:12.976 /dev/nbd1' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:12.976 /dev/nbd1' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:12.976 256+0 records in 00:23:12.976 256+0 records out 00:23:12.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00641021 s, 164 MB/s 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:12.976 256+0 records in 00:23:12.976 256+0 records out 00:23:12.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286446 s, 36.6 MB/s 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:12.976 256+0 records in 00:23:12.976 256+0 records out 00:23:12.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289984 s, 36.2 MB/s 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:12.976 06:49:45 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.976 06:49:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.540 06:49:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:13.797 06:49:46 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:13.797 06:49:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:14.055 06:49:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:14.055 06:49:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:14.620 06:49:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:15.551 [2024-12-06 06:49:47.959554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:15.551 [2024-12-06 06:49:48.058182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.551 [2024-12-06 06:49:48.058186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.808 [2024-12-06 06:49:48.223693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:15.808 [2024-12-06 06:49:48.223820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:17.709 06:49:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:17.709 spdk_app_start Round 1 00:23:17.709 06:49:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:23:17.709 06:49:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59339 /var/tmp/spdk-nbd.sock 00:23:17.709 06:49:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59339 ']' 00:23:17.709 06:49:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:17.709 06:49:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:17.709 06:49:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
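Round 0 above already shows the write/verify cycle every round repeats: seed a 1 MiB scratch file from /dev/urandom, dd it onto each exported nbd device with O_DIRECT, then cmp each device against the file before tearing the disks down. The same steps in isolation, with the scratch-file name and dd/cmp arguments taken from the trace (the device list assumes both malloc bdevs are exported):

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=nbdrandtest

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256     # 1 MiB of noise
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"  # byte-compare the first 1 MiB back
    done
    rm "$tmp_file"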
00:23:17.709 06:49:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.709 06:49:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:17.709 06:49:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.709 06:49:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:23:17.709 06:49:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:18.275 Malloc0 00:23:18.275 06:49:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:18.533 Malloc1 00:23:18.533 06:49:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:18.533 06:49:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:18.792 /dev/nbd0 00:23:18.792 06:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:18.792 06:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:18.792 1+0 records in 00:23:18.792 1+0 records out 
00:23:18.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319286 s, 12.8 MB/s 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:18.792 06:49:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:23:18.792 06:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:18.792 06:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:18.792 06:49:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:19.050 /dev/nbd1 00:23:19.050 06:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:19.050 06:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:19.050 1+0 records in 00:23:19.050 1+0 records out 00:23:19.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369243 s, 11.1 MB/s 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:19.050 06:49:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:23:19.050 06:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:19.050 06:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:19.050 06:49:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:19.050 06:49:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:19.050 06:49:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:19.309 06:49:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:19.309 { 00:23:19.309 "nbd_device": "/dev/nbd0", 00:23:19.309 "bdev_name": "Malloc0" 00:23:19.309 }, 00:23:19.309 { 00:23:19.309 "nbd_device": "/dev/nbd1", 00:23:19.309 "bdev_name": "Malloc1" 00:23:19.309 } 
00:23:19.309 ]' 00:23:19.309 06:49:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:19.309 { 00:23:19.309 "nbd_device": "/dev/nbd0", 00:23:19.309 "bdev_name": "Malloc0" 00:23:19.309 }, 00:23:19.309 { 00:23:19.309 "nbd_device": "/dev/nbd1", 00:23:19.309 "bdev_name": "Malloc1" 00:23:19.309 } 00:23:19.309 ]' 00:23:19.309 06:49:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:19.567 /dev/nbd1' 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:19.567 /dev/nbd1' 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:19.567 256+0 records in 00:23:19.567 256+0 records out 00:23:19.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658556 s, 159 MB/s 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:19.567 06:49:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:19.567 256+0 records in 00:23:19.567 256+0 records out 00:23:19.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0413022 s, 25.4 MB/s 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:19.567 256+0 records in 00:23:19.567 256+0 records out 00:23:19.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310051 s, 33.8 MB/s 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:19.567 06:49:52 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:19.567 06:49:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:19.826 06:49:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:20.084 06:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:20.084 06:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:20.084 06:49:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:20.084 06:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:20.084 06:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:20.085 06:49:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:20.085 06:49:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:20.085 06:49:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:20.085 06:49:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:20.085 06:49:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:20.085 06:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:20.652 06:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:20.652 06:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:20.652 06:49:52 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:20.652 06:49:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:20.652 06:49:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:21.218 06:49:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:22.153 [2024-12-06 06:49:54.591672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:22.153 [2024-12-06 06:49:54.690579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.153 [2024-12-06 06:49:54.690589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.412 [2024-12-06 06:49:54.855735] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:22.412 [2024-12-06 06:49:54.855850] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:24.354 spdk_app_start Round 2 00:23:24.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:24.354 06:49:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:24.354 06:49:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:23:24.354 06:49:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59339 /var/tmp/spdk-nbd.sock 00:23:24.354 06:49:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59339 ']' 00:23:24.354 06:49:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:24.354 06:49:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.354 06:49:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
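Each round ends with the teardown check seen here: after both nbd_stop_disk calls, nbd_get_disks must report an empty array, which jq reduces to an empty name list and grep -c to a count of 0. Roughly, with the paths from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)  # empty -> 0
    [ "$count" -eq 0 ] || echo "devices still exported: $nbd_disks_name"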
00:23:24.354 06:49:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.354 06:49:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:24.354 06:49:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.354 06:49:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:23:24.354 06:49:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:24.612 Malloc0 00:23:24.612 06:49:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:25.178 Malloc1 00:23:25.178 06:49:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:25.178 06:49:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:25.437 /dev/nbd0 00:23:25.437 06:49:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:25.437 06:49:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:25.437 1+0 records in 00:23:25.437 1+0 records out 
00:23:25.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251221 s, 16.3 MB/s 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:25.437 06:49:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:23:25.437 06:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:25.437 06:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:25.437 06:49:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:25.695 /dev/nbd1 00:23:25.695 06:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:25.695 06:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:25.695 1+0 records in 00:23:25.695 1+0 records out 00:23:25.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296557 s, 13.8 MB/s 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:25.695 06:49:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:23:25.695 06:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:25.695 06:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:25.695 06:49:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:25.695 06:49:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:25.695 06:49:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:25.953 { 00:23:25.953 "nbd_device": "/dev/nbd0", 00:23:25.953 "bdev_name": "Malloc0" 00:23:25.953 }, 00:23:25.953 { 00:23:25.953 "nbd_device": "/dev/nbd1", 00:23:25.953 "bdev_name": "Malloc1" 00:23:25.953 } 
00:23:25.953 ]' 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:25.953 { 00:23:25.953 "nbd_device": "/dev/nbd0", 00:23:25.953 "bdev_name": "Malloc0" 00:23:25.953 }, 00:23:25.953 { 00:23:25.953 "nbd_device": "/dev/nbd1", 00:23:25.953 "bdev_name": "Malloc1" 00:23:25.953 } 00:23:25.953 ]' 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:25.953 /dev/nbd1' 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:25.953 /dev/nbd1' 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:25.953 06:49:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:26.212 256+0 records in 00:23:26.212 256+0 records out 00:23:26.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722049 s, 145 MB/s 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:26.212 256+0 records in 00:23:26.212 256+0 records out 00:23:26.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307286 s, 34.1 MB/s 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:26.212 256+0 records in 00:23:26.212 256+0 records out 00:23:26.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299602 s, 35.0 MB/s 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:26.212 06:49:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:26.212 06:49:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:26.470 06:49:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:26.728 06:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:26.987 06:49:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:26.987 06:49:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:27.554 06:50:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:28.488 [2024-12-06 06:50:01.062747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:28.746 [2024-12-06 06:50:01.161748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.746 [2024-12-06 06:50:01.161759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.746 [2024-12-06 06:50:01.328602] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:28.746 [2024-12-06 06:50:01.328729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:30.646 06:50:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59339 /var/tmp/spdk-nbd.sock 00:23:30.646 06:50:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59339 ']' 00:23:30.646 06:50:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:30.646 06:50:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.646 06:50:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:30.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
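The event/event.sh@23-35 markers in the trace outline the loop that drives these rounds: announce the round, wait for the restarted app to listen on the RPC socket, create the two 64 MB malloc bdevs with 4 KiB blocks, run the nbd verify, then ask the app to terminate and sleep before the next pass. A schematic reconstruction; waitforlisten and nbd_rpc_data_verify are the suite's own helpers, and repeat_pid is the pid captured at event.sh@19:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_server=/var/tmp/spdk-nbd.sock

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        "$rpc" -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc0
        "$rpc" -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # SIGTERM makes app_repeat (-t 4) restart itself for the next round
        "$rpc" -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3
    done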
00:23:30.646 06:50:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.647 06:50:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:23:30.904 06:50:03 event.app_repeat -- event/event.sh@39 -- # killprocess 59339 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59339 ']' 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59339 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59339 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.904 killing process with pid 59339 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59339' 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59339 00:23:30.904 06:50:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59339 00:23:31.851 spdk_app_start is called in Round 0. 00:23:31.851 Shutdown signal received, stop current app iteration 00:23:31.851 Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 reinitialization... 00:23:31.851 spdk_app_start is called in Round 1. 00:23:31.851 Shutdown signal received, stop current app iteration 00:23:31.851 Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 reinitialization... 00:23:31.851 spdk_app_start is called in Round 2. 00:23:31.851 Shutdown signal received, stop current app iteration 00:23:31.851 Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 reinitialization... 00:23:31.851 spdk_app_start is called in Round 3. 00:23:31.851 Shutdown signal received, stop current app iteration 00:23:31.851 06:50:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:23:31.851 06:50:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:23:31.851 00:23:31.851 real 0m21.731s 00:23:31.851 user 0m48.602s 00:23:31.851 sys 0m2.806s 00:23:31.851 06:50:04 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.851 06:50:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:31.851 ************************************ 00:23:31.851 END TEST app_repeat 00:23:31.851 ************************************ 00:23:31.851 06:50:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:23:31.851 06:50:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:23:31.851 06:50:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:31.851 06:50:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.851 06:50:04 event -- common/autotest_common.sh@10 -- # set +x 00:23:31.851 ************************************ 00:23:31.851 START TEST cpu_locks 00:23:31.851 ************************************ 00:23:31.851 06:50:04 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:23:31.851 * Looking for test storage... 
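The cpu_locks suite that starts here opens by probing lcov's version through the cmp_versions machinery traced below ('lt 1.15 2'): both version strings are split on '.', '-' and ':' and compared field by field. An outline, simplified to numeric fields (the real helper in scripts/common.sh also normalizes each field through a decimal check):

    cmp_versions() {  # e.g. cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v n
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == *'>'* ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == *'<'* ]]; return
            fi
        done
        [[ $op == *'='* ]]               # equal: true for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; } # 'lt 1.15 2' succeeds since 1 < 2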
00:23:31.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:23:31.851 06:50:04 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:31.851 06:50:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:23:32.161 06:50:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:32.161 06:50:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.161 06:50:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:23:32.161 06:50:04 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.161 06:50:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:32.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.161 --rc genhtml_branch_coverage=1 00:23:32.161 --rc genhtml_function_coverage=1 00:23:32.161 --rc genhtml_legend=1 00:23:32.161 --rc geninfo_all_blocks=1 00:23:32.161 --rc geninfo_unexecuted_blocks=1 00:23:32.161 00:23:32.162 ' 00:23:32.162 06:50:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:32.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.162 --rc genhtml_branch_coverage=1 00:23:32.162 --rc genhtml_function_coverage=1 
00:23:32.162 --rc genhtml_legend=1 00:23:32.162 --rc geninfo_all_blocks=1 00:23:32.162 --rc geninfo_unexecuted_blocks=1 00:23:32.162 00:23:32.162 ' 00:23:32.162 06:50:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:32.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.162 --rc genhtml_branch_coverage=1 00:23:32.162 --rc genhtml_function_coverage=1 00:23:32.162 --rc genhtml_legend=1 00:23:32.162 --rc geninfo_all_blocks=1 00:23:32.162 --rc geninfo_unexecuted_blocks=1 00:23:32.162 00:23:32.162 ' 00:23:32.162 06:50:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:32.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.162 --rc genhtml_branch_coverage=1 00:23:32.162 --rc genhtml_function_coverage=1 00:23:32.162 --rc genhtml_legend=1 00:23:32.162 --rc geninfo_all_blocks=1 00:23:32.162 --rc geninfo_unexecuted_blocks=1 00:23:32.162 00:23:32.162 ' 00:23:32.162 06:50:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:23:32.162 06:50:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:23:32.162 06:50:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:23:32.162 06:50:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:23:32.162 06:50:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:32.162 06:50:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.162 06:50:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:32.162 ************************************ 00:23:32.162 START TEST default_locks 00:23:32.162 ************************************ 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59808 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59808 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59808 ']' 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.162 06:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:32.162 [2024-12-06 06:50:04.636588] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
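The lt 1.15 2 / cmp_versions trace in the coverage setup above decides whether the installed lcov is new enough for the branch-coverage flags exported in LCOV_OPTS. The helper splits both versions on dots, dashes and colons and compares them column by column; a simplified sketch of that walk (the traced original also routes each component through a decimal() validator, omitted here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:            # scripts/common.sh@336-337: split the fields
        local -a ver1 ver2
        local v op=$2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # versions are equal
    }

So 1.15 < 2 holds at the first column (1 < 2), and the lcov options echoed above get exported.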
00:23:32.162 [2024-12-06 06:50:04.636735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59808 ] 00:23:32.420 [2024-12-06 06:50:04.816605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.420 [2024-12-06 06:50:04.919406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.351 06:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.351 06:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:23:33.351 06:50:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59808 00:23:33.351 06:50:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:33.351 06:50:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59808 00:23:33.610 06:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59808 00:23:33.610 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59808 ']' 00:23:33.610 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59808 00:23:33.610 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:23:33.610 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.610 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59808 00:23:33.610 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.610 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59808 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59808' 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59808 06:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59808 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59808 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59808 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59808 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59808 ']' 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:36.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:36.141 ERROR: process (pid: 59808) is no longer running 00:23:36.141 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59808) - No such process 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:36.141 00:23:36.141 real 0m3.794s 00:23:36.141 user 0m3.956s 00:23:36.141 sys 0m0.602s 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.141 ************************************ 00:23:36.141 END TEST default_locks 00:23:36.141 ************************************ 00:23:36.141 06:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:36.141 06:50:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:23:36.141 06:50:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:36.141 06:50:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.141 06:50:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:36.141 ************************************ 00:23:36.141 START TEST default_locks_via_rpc 00:23:36.141 ************************************ 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59885 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59885 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59885 ']' 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
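Two small cpu_locks.sh assertions recur through this whole run: locks_exist, which requires that a pid still holds a file lock named spdk_cpu_lock_* (checked via lslocks, as traced at cpu_locks.sh@22), and no_locks, which requires that no /var/tmp/spdk_cpu_lock_* files survive a shutdown (cpu_locks.sh@26-27). Reconstructed from the trace rather than copied from the script:

    locks_exist() {
        # The target with pid $1 must hold at least one core-lock file.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    no_locks() {
        # With nullglob, the array stays empty when no lock files remain.
        local lock_files=()
        shopt -s nullglob
        lock_files=(/var/tmp/spdk_cpu_lock_*)
        shopt -u nullglob
        (( ${#lock_files[@]} == 0 ))      # the trace's '(( 0 != 0 ))', inverted
    }

default_locks above passes because killing pid 59808 releases its core lock, so no_locks finds nothing left behind.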
00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.142 06:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:36.142 [2024-12-06 06:50:08.470899] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:23:36.142 [2024-12-06 06:50:08.471047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59885 ] 00:23:36.142 [2024-12-06 06:50:08.642623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.400 [2024-12-06 06:50:08.745722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59885 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:36.967 06:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59885 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59885 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59885 ']' 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59885 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59885 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59885 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59885' 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59885 00:23:37.535 06:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59885 00:23:40.068 00:23:40.068 real 0m3.769s 00:23:40.068 user 0m3.997s 00:23:40.068 sys 0m0.637s 00:23:40.068 06:50:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:40.068 06:50:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:40.068 ************************************ 00:23:40.068 END TEST default_locks_via_rpc 00:23:40.068 ************************************ 00:23:40.068 06:50:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:23:40.068 06:50:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:40.068 06:50:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:40.068 06:50:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:40.068 ************************************ 00:23:40.068 START TEST non_locking_app_on_locked_coremask 00:23:40.068 ************************************ 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59959 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59959 /var/tmp/spdk.sock 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59959 ']' 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.068 06:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:40.068 [2024-12-06 06:50:12.287932] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
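default_locks_via_rpc, just finished above, drives the same mechanism at runtime instead of at startup: framework_disable_cpumask_locks and framework_enable_cpumask_locks are the SPDK RPCs traced at cpu_locks.sh@65 and @69. The shape of the check, sketched with rpc.py against the default socket and the no_locks helper sketched earlier:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Drop the core-lock files while the target keeps running...
    $rpc framework_disable_cpumask_locks
    no_locks                                   # nothing left in /var/tmp

    # ...then retake them and prove they are held again.
    $rpc framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock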
00:23:40.068 [2024-12-06 06:50:12.288083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59959 ] 00:23:40.068 [2024-12-06 06:50:12.458101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.068 [2024-12-06 06:50:12.561443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59975 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59975 /var/tmp/spdk2.sock 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59975 ']' 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.002 06:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:41.002 [2024-12-06 06:50:13.425410] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:23:41.002 [2024-12-06 06:50:13.425558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59975 ] 00:23:41.260 [2024-12-06 06:50:13.620871] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
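The setup of non_locking_app_on_locked_coremask is visible above: a first spdk_tgt claims core 0 through /var/tmp/spdk.sock, then a second instance starts on the very same -m 0x1 mask but with --disable-cpumask-locks and its own RPC socket, which is why the log shows 'CPU core locks deactivated.' instead of a startup failure. In outline, with the flags exactly as traced:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance takes the file lock for core 0.
    $SPDK_BIN -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

    # Second instance shares the mask but opts out of core locking.
    $SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock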
00:23:41.260 [2024-12-06 06:50:13.620943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.260 [2024-12-06 06:50:13.835347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.162 06:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.162 06:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:23:43.162 06:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59959 00:23:43.162 06:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59959 00:23:43.162 06:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59959 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59959 ']' 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59959 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59959 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59959 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59959' 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59959 00:23:43.726 06:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59959 00:23:47.911 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59975 00:23:47.911 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59975 ']' 00:23:47.911 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59975 00:23:47.911 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:23:47.911 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.911 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59975 00:23:47.911 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:47.911 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59975 06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59975'
06:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59975 00:23:50.460 06:50:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59975 00:23:50.460 00:23:50.460 real 0m10.347s 00:23:50.460 user 0m11.050s 00:23:50.460 sys 0m1.189s 00:23:50.460 06:50:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.460 06:50:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:50.460 ************************************ 00:23:50.460 END TEST non_locking_app_on_locked_coremask 00:23:50.460 ************************************ 00:23:50.460 06:50:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:23:50.460 06:50:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:50.460 06:50:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.460 06:50:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:50.460 ************************************ 00:23:50.460 START TEST locking_app_on_unlocked_coremask 00:23:50.460 ************************************ 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60115 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60115 /var/tmp/spdk.sock 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60115 ']' 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.460 06:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:50.461 [2024-12-06 06:50:22.734429] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:23:50.461 [2024-12-06 06:50:22.734608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60115 ] 00:23:50.461 [2024-12-06 06:50:22.921896] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
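Every START TEST / END TEST banner and every real/user/sys triple in this log comes from the run_test wrapper in autotest_common.sh, which names a test, times it, and frames it in asterisks. A plausible reconstruction of that frame (the real wrapper also validates its arguments, which is what the '[' 2 -le 1 ']' checks above are doing):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"            # produces the real/user/sys lines in the log
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }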
00:23:50.461 [2024-12-06 06:50:22.922015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.461 [2024-12-06 06:50:23.048448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60131 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60131 /var/tmp/spdk2.sock 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60131 ']' 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:51.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.396 06:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:51.396 [2024-12-06 06:50:23.912858] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:23:51.396 [2024-12-06 06:50:23.913007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60131 ] 00:23:51.653 [2024-12-06 06:50:24.103994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.910 [2024-12-06 06:50:24.308564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.285 06:50:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.285 06:50:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:23:53.285 06:50:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60131 00:23:53.285 06:50:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60131 00:23:53.285 06:50:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:54.219 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60115 00:23:54.219 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60115 ']' 00:23:54.219 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60115 00:23:54.219 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:23:54.219 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.219 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60115 00:23:54.219 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.219 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 60115 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60115' 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60115 06:50:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60115 00:23:58.558 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60131 00:23:58.558 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60131 ']' 00:23:58.558 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60131 00:23:58.558 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:23:58.558 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.558 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60131 killing process with pid 60131 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60131' 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60131 06:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60131 00:24:00.457 00:24:00.457 real 0m10.358s 00:24:00.457 user 0m10.927s 00:24:00.457 sys 0m1.177s 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:00.457 ************************************ 00:24:00.457 END TEST locking_app_on_unlocked_coremask 00:24:00.457 ************************************ 00:24:00.457 06:50:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:24:00.457 06:50:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:00.457 06:50:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.457 06:50:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:00.457 ************************************ 00:24:00.457 START TEST locking_app_on_locked_coremask 00:24:00.457 ************************************ 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60265 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60265 /var/tmp/spdk.sock 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60265 ']' 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.457 06:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:00.716 [2024-12-06 06:50:33.116920] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
00:24:00.716 [2024-12-06 06:50:33.117121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:24:00.975 [2024-12-06 06:50:33.305881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.975 [2024-12-06 06:50:33.431495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60288 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60288 /var/tmp/spdk2.sock 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60288 /var/tmp/spdk2.sock 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60288 /var/tmp/spdk2.sock 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60288 ']' 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.910 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:01.910 [2024-12-06 06:50:34.311661] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
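locking_app_on_locked_coremask flips the expectation: the second target keeps core locking enabled on an already-claimed core, so the test wraps waitforlisten in NOT and demands failure. The NOT helper traced here first verifies through valid_exec_arg / type -t that its argument is actually runnable, runs it with the exit status captured in es, and succeeds only when es is non-zero. Condensed (the original also special-cases exit codes above 128, per the '(( es > 128 ))' line, which this sketch omits):

    valid_exec_arg() {
        # type -t prints function/builtin/file/alias for runnable names.
        case "$(type -t "$1")" in
            function|builtin|file|alias) return 0 ;;
            *) return 1 ;;
        esac
    }

    NOT() {
        local es=0
        valid_exec_arg "$@" || return 1
        "$@" || es=$?
        (( !es == 0 ))       # invert: only a failing command satisfies NOT
    }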
00:24:01.910 [2024-12-06 06:50:34.311839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60288 ] 00:24:02.168 [2024-12-06 06:50:34.503190] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60265 has claimed it. 00:24:02.168 [2024-12-06 06:50:34.503270] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:24:02.427 ERROR: process (pid: 60288) is no longer running 00:24:02.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60288) - No such process 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60265 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60265 00:24:02.427 06:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:02.993 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60265 00:24:02.993 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60265 ']' 00:24:02.993 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60265 00:24:02.993 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:24:02.993 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.993 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60265 00:24:02.993 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.993 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 60265 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60265' 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60265 06:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60265 00:24:05.526 00:24:05.526 real 0m4.588s 00:24:05.526 user 0m5.003s 00:24:05.526 sys 0m0.759s ************************************ 00:24:05.526 END TEST locking_app_on_locked_coremask ************************************ 00:24:05.526
06:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.526 06:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:05.526 06:50:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:24:05.526 06:50:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:05.526 06:50:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.526 06:50:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:05.526 ************************************ 00:24:05.526 START TEST locking_overlapped_coremask 00:24:05.526 ************************************ 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60352 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60352 /var/tmp/spdk.sock 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60352 ']' 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.527 06:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:05.527 [2024-12-06 06:50:37.751932] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
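waitforlisten, invoked after every spdk_tgt launch in this file, is the gate that makes those NOT assertions meaningful: it polls (up to max_retries=100) until the pid is alive and its UNIX-domain RPC socket answers, printing the 'Waiting for process...' and 'ERROR: process ... is no longer running' messages seen throughout. A hedged approximation; the real helper's socket probe may differ from the rpc_get_methods call assumed here:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local max_retries=100
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = max_retries; i > 0; i-- )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo "ERROR: process (pid: $pid) is no longer running" >&2
                return 1
            fi
            # Assumed probe: any cheap RPC proves the socket is serving.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }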
00:24:05.527 [2024-12-06 06:50:37.752296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60352 ] 00:24:05.527 [2024-12-06 06:50:37.929059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:05.527 [2024-12-06 06:50:38.086649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.527 [2024-12-06 06:50:38.086737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.527 [2024-12-06 06:50:38.086742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60370 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60370 /var/tmp/spdk2.sock 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60370 /var/tmp/spdk2.sock 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:24:06.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60370 /var/tmp/spdk2.sock 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60370 ']' 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.465 06:50:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:06.465 [2024-12-06 06:50:38.996080] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:24:06.465 [2024-12-06 06:50:38.996249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60370 ] 00:24:06.724 [2024-12-06 06:50:39.195591] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60352 has claimed it. 00:24:06.724 [2024-12-06 06:50:39.195684] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:24:07.292 ERROR: process (pid: 60370) is no longer running 00:24:07.292 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60370) - No such process 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60352 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60352 ']' 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60352 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60352 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.292 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60352' killing process with pid 60352 06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60352
06:50:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60352 00:24:09.367 00:24:09.367 real 0m4.269s 00:24:09.367 user 0m11.704s 00:24:09.367 sys 0m0.557s 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:09.367 ************************************ 00:24:09.367 END TEST locking_overlapped_coremask 00:24:09.367 ************************************ 00:24:09.367 06:50:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:24:09.367 06:50:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:09.367 06:50:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.367 06:50:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:09.367 ************************************ 00:24:09.367 START TEST locking_overlapped_coremask_via_rpc 00:24:09.367 ************************************ 00:24:09.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60434 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60434 /var/tmp/spdk.sock 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60434 ']' 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.367 06:50:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.625 [2024-12-06 06:50:42.059749] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:24:09.625 [2024-12-06 06:50:42.059919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60434 ] 00:24:09.884 [2024-12-06 06:50:42.247488] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
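check_remaining_locks, traced at cpu_locks.sh@36-38 just above, is the assertion that a -m 0x7 target owns exactly three lock files, one per core in the mask. It globs the actual files and compares them against a brace expansion of the expected names:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        # Cores 0-2 of mask 0x7: _000, _001 and _002, no more, no fewer.
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }

The backslash-escaped pattern in the trace is just this comparison with an unquoted right-hand side; quoting it, as here, forces the same literal match.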
00:24:09.884 [2024-12-06 06:50:42.247555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:09.884 [2024-12-06 06:50:42.355111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.884 [2024-12-06 06:50:42.355219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.884 [2024-12-06 06:50:42.355227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60452 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60452 /var/tmp/spdk2.sock 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60452 ']' 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.820 06:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 [2024-12-06 06:50:43.241325] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:24:10.820 [2024-12-06 06:50:43.241743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60452 ] 00:24:11.079 [2024-12-06 06:50:43.451988] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
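The two targets above are deliberately started with overlapping reactor masks: pid 60434 with -m 0x7 (cores 0-2) and pid 60452 with -m 0x1c (cores 2-4), both with --disable-cpumask-locks so the conflict is deferred until the RPC test below. The shared core falls out of simple mask arithmetic (illustrative, not part of the test):

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> CPU core 2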
00:24:11.079 [2024-12-06 06:50:43.452054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:11.079 [2024-12-06 06:50:43.667573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.079 [2024-12-06 06:50:43.667698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.079 [2024-12-06 06:50:43.667762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:12.979 [2024-12-06 06:50:45.228902] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60434 has claimed it. 
00:24:12.979 request: 00:24:12.979 { 00:24:12.979 "method": "framework_enable_cpumask_locks", 00:24:12.979 "req_id": 1 00:24:12.979 } 00:24:12.979 Got JSON-RPC error response 00:24:12.979 response: 00:24:12.979 { 00:24:12.979 "code": -32603, 00:24:12.979 "message": "Failed to claim CPU core: 2" 00:24:12.979 } 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60434 /var/tmp/spdk.sock 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60434 ']' 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.979 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60452 /var/tmp/spdk2.sock 00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60452 ']' 00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:13.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
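The failure above is the point of the test: the suite's NOT wrapper runs rpc_cmd expecting a non-zero exit, records it in es, and only then lets the script continue. A rough sketch of that pattern with the same socket and method (this NOT body is a simplification of the real helper in autotest_common.sh):

  NOT() { ! "$@"; }   # succeed only when the wrapped command fails
  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails with -32603: core 2 already claimed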
00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.237 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:13.494 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.494 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:13.494 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:24:13.494 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:24:13.494 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:24:13.494 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:24:13.494 00:24:13.494 real 0m3.951s 00:24:13.494 user 0m1.692s 00:24:13.494 sys 0m0.195s 00:24:13.494 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.494 06:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:13.494 ************************************ 00:24:13.494 END TEST locking_overlapped_coremask_via_rpc 00:24:13.494 ************************************ 00:24:13.494 06:50:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:24:13.494 06:50:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60434 ]] 00:24:13.494 06:50:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60434 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60434 ']' 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60434 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60434 00:24:13.494 killing process with pid 60434 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60434' 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60434 00:24:13.494 06:50:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60434 00:24:16.024 06:50:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60452 ]] 00:24:16.024 06:50:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60452 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60452 ']' 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60452 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.024 
06:50:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60452 00:24:16.024 killing process with pid 60452 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60452' 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60452 00:24:16.024 06:50:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60452 00:24:17.928 06:50:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:24:17.928 06:50:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:24:17.928 06:50:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60434 ]] 00:24:17.928 06:50:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60434 00:24:17.928 Process with pid 60434 is not found 00:24:17.928 Process with pid 60452 is not found 00:24:17.928 06:50:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60434 ']' 00:24:17.928 06:50:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60434 00:24:17.928 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60434) - No such process 00:24:17.928 06:50:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60434 is not found' 00:24:17.928 06:50:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60452 ]] 00:24:17.928 06:50:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60452 00:24:17.928 06:50:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60452 ']' 00:24:17.928 06:50:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60452 00:24:17.928 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60452) - No such process 00:24:17.928 06:50:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60452 is not found' 00:24:17.928 06:50:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:24:17.928 00:24:17.928 real 0m46.041s 00:24:17.928 user 1m20.556s 00:24:17.928 sys 0m6.082s 00:24:17.928 06:50:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.928 06:50:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:17.928 ************************************ 00:24:17.928 END TEST cpu_locks 00:24:17.928 ************************************ 00:24:17.928 00:24:17.928 real 1m18.788s 00:24:17.928 user 2m26.945s 00:24:17.928 sys 0m9.862s 00:24:17.928 06:50:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.928 06:50:50 event -- common/autotest_common.sh@10 -- # set +x 00:24:17.928 ************************************ 00:24:17.928 END TEST event 00:24:17.928 ************************************ 00:24:17.928 06:50:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:24:17.928 06:50:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:17.928 06:50:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.928 06:50:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.928 ************************************ 00:24:17.928 START TEST thread 00:24:17.928 ************************************ 00:24:17.928 06:50:50 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:24:18.187 * Looking for test storage... 
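killprocess, traced repeatedly in the cleanup above, tolerates targets that have already exited: kill -0 probes the pid, the resulting 'No such process' error is swallowed, and the helper just reports the pid as gone. The same guard in isolation (a sketch assuming pid is set; the real helper also checks the process name via ps before killing):

  if kill -0 "$pid" 2>/dev/null; then
    kill "$pid" && wait "$pid"
  else
    echo "Process with pid $pid is not found"
  fi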
00:24:18.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:18.187 06:50:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.187 06:50:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.187 06:50:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.187 06:50:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.187 06:50:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.187 06:50:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.187 06:50:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.187 06:50:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.187 06:50:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.187 06:50:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.187 06:50:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.187 06:50:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:24:18.187 06:50:50 thread -- scripts/common.sh@345 -- # : 1 00:24:18.187 06:50:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.187 06:50:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.187 06:50:50 thread -- scripts/common.sh@365 -- # decimal 1 00:24:18.187 06:50:50 thread -- scripts/common.sh@353 -- # local d=1 00:24:18.187 06:50:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.187 06:50:50 thread -- scripts/common.sh@355 -- # echo 1 00:24:18.187 06:50:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.187 06:50:50 thread -- scripts/common.sh@366 -- # decimal 2 00:24:18.187 06:50:50 thread -- scripts/common.sh@353 -- # local d=2 00:24:18.187 06:50:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.187 06:50:50 thread -- scripts/common.sh@355 -- # echo 2 00:24:18.187 06:50:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.187 06:50:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.187 06:50:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.187 06:50:50 thread -- scripts/common.sh@368 -- # return 0 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:18.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.187 --rc genhtml_branch_coverage=1 00:24:18.187 --rc genhtml_function_coverage=1 00:24:18.187 --rc genhtml_legend=1 00:24:18.187 --rc geninfo_all_blocks=1 00:24:18.187 --rc geninfo_unexecuted_blocks=1 00:24:18.187 00:24:18.187 ' 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:18.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.187 --rc genhtml_branch_coverage=1 00:24:18.187 --rc genhtml_function_coverage=1 00:24:18.187 --rc genhtml_legend=1 00:24:18.187 --rc geninfo_all_blocks=1 00:24:18.187 --rc geninfo_unexecuted_blocks=1 00:24:18.187 00:24:18.187 ' 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:18.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:18.187 --rc genhtml_branch_coverage=1 00:24:18.187 --rc genhtml_function_coverage=1 00:24:18.187 --rc genhtml_legend=1 00:24:18.187 --rc geninfo_all_blocks=1 00:24:18.187 --rc geninfo_unexecuted_blocks=1 00:24:18.187 00:24:18.187 ' 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:18.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.187 --rc genhtml_branch_coverage=1 00:24:18.187 --rc genhtml_function_coverage=1 00:24:18.187 --rc genhtml_legend=1 00:24:18.187 --rc geninfo_all_blocks=1 00:24:18.187 --rc geninfo_unexecuted_blocks=1 00:24:18.187 00:24:18.187 ' 00:24:18.187 06:50:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.187 06:50:50 thread -- common/autotest_common.sh@10 -- # set +x 00:24:18.187 ************************************ 00:24:18.187 START TEST thread_poller_perf 00:24:18.187 ************************************ 00:24:18.187 06:50:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:18.187 [2024-12-06 06:50:50.726090] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:24:18.187 [2024-12-06 06:50:50.726436] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:24:18.446 [2024-12-06 06:50:50.910402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.446 [2024-12-06 06:50:51.013014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.446 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:24:19.822 [2024-12-06T06:50:52.413Z] ====================================== 00:24:19.822 [2024-12-06T06:50:52.413Z] busy:2213495577 (cyc) 00:24:19.822 [2024-12-06T06:50:52.413Z] total_run_count: 292000 00:24:19.822 [2024-12-06T06:50:52.413Z] tsc_hz: 2200000000 (cyc) 00:24:19.822 [2024-12-06T06:50:52.413Z] ====================================== 00:24:19.822 [2024-12-06T06:50:52.413Z] poller_cost: 7580 (cyc), 3445 (nsec) 00:24:19.822 00:24:19.822 real 0m1.556s 00:24:19.822 user 0m1.368s 00:24:19.822 sys 0m0.080s 00:24:19.822 ************************************ 00:24:19.822 END TEST thread_poller_perf 00:24:19.822 ************************************ 00:24:19.822 06:50:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.822 06:50:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.822 06:50:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:19.822 06:50:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:24:19.822 06:50:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.822 06:50:52 thread -- common/autotest_common.sh@10 -- # set +x 00:24:19.822 ************************************ 00:24:19.822 START TEST thread_poller_perf 00:24:19.822 ************************************ 00:24:19.822 06:50:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:19.822 [2024-12-06 06:50:52.331365] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:24:19.822 [2024-12-06 06:50:52.331514] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:24:20.081 [2024-12-06 06:50:52.503437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.081 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:24:20.081 [2024-12-06 06:50:52.610057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.459 [2024-12-06T06:50:54.050Z] ====================================== 00:24:21.459 [2024-12-06T06:50:54.050Z] busy:2204138746 (cyc) 00:24:21.459 [2024-12-06T06:50:54.050Z] total_run_count: 3251000 00:24:21.459 [2024-12-06T06:50:54.050Z] tsc_hz: 2200000000 (cyc) 00:24:21.459 [2024-12-06T06:50:54.050Z] ====================================== 00:24:21.459 [2024-12-06T06:50:54.050Z] poller_cost: 677 (cyc), 307 (nsec) 00:24:21.459 ************************************ 00:24:21.459 END TEST thread_poller_perf 00:24:21.459 ************************************ 00:24:21.459 00:24:21.459 real 0m1.542s 00:24:21.459 user 0m1.350s 00:24:21.459 sys 0m0.083s 00:24:21.459 06:50:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.459 06:50:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.459 06:50:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:24:21.459 00:24:21.459 real 0m3.389s 00:24:21.459 user 0m2.873s 00:24:21.459 sys 0m0.292s 00:24:21.459 ************************************ 00:24:21.459 END TEST thread 00:24:21.459 ************************************ 00:24:21.459 06:50:53 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.459 06:50:53 thread -- common/autotest_common.sh@10 -- # set +x 00:24:21.459 06:50:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:24:21.459 06:50:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:24:21.459 06:50:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:21.459 06:50:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.459 06:50:53 -- common/autotest_common.sh@10 -- # set +x 00:24:21.459 ************************************ 00:24:21.459 START TEST app_cmdline 00:24:21.459 ************************************ 00:24:21.459 06:50:53 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:24:21.459 * Looking for test storage... 
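The two poller_cost figures above follow directly from the counters in each table: cycles per poll is busy divided by total_run_count, converted to nanoseconds through tsc_hz (2.2 GHz here). Reproducing both results with shell arithmetic:

  echo $(( 2213495577 / 292000 ))    # run 1 (-l 1, 1 usec period): 7580 cycles per poll
  echo $(( 7580 * 1000 / 2200 ))     # 3445 nsec at tsc_hz 2200000000
  echo $(( 2204138746 / 3251000 ))   # run 2 (-l 0, busy loop): 677 cycles per poll
  echo $(( 677 * 1000 / 2200 ))      # 307 nsec

The roughly 11x gap per poll is consistent with the 1 usec run paying timer bookkeeping on every expiration, while the 0 usec run dispatches pollers back to back.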
00:24:21.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:24:21.459 06:50:54 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:21.459 06:50:54 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:24:21.459 06:50:54 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:21.718 06:50:54 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.718 06:50:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:24:21.718 06:50:54 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.718 06:50:54 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:21.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.718 --rc genhtml_branch_coverage=1 00:24:21.718 --rc genhtml_function_coverage=1 00:24:21.718 --rc genhtml_legend=1 00:24:21.718 --rc geninfo_all_blocks=1 00:24:21.718 --rc geninfo_unexecuted_blocks=1 00:24:21.718 00:24:21.718 ' 00:24:21.718 06:50:54 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:21.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.718 --rc genhtml_branch_coverage=1 00:24:21.718 --rc genhtml_function_coverage=1 00:24:21.718 --rc genhtml_legend=1 00:24:21.718 --rc geninfo_all_blocks=1 00:24:21.718 --rc geninfo_unexecuted_blocks=1 00:24:21.718 
00:24:21.718 ' 00:24:21.718 06:50:54 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:21.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.718 --rc genhtml_branch_coverage=1 00:24:21.718 --rc genhtml_function_coverage=1 00:24:21.718 --rc genhtml_legend=1 00:24:21.718 --rc geninfo_all_blocks=1 00:24:21.718 --rc geninfo_unexecuted_blocks=1 00:24:21.718 00:24:21.719 ' 00:24:21.719 06:50:54 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:21.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.719 --rc genhtml_branch_coverage=1 00:24:21.719 --rc genhtml_function_coverage=1 00:24:21.719 --rc genhtml_legend=1 00:24:21.719 --rc geninfo_all_blocks=1 00:24:21.719 --rc geninfo_unexecuted_blocks=1 00:24:21.719 00:24:21.719 ' 00:24:21.719 06:50:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:24:21.719 06:50:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60765 00:24:21.719 06:50:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60765 00:24:21.719 06:50:54 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:24:21.719 06:50:54 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60765 ']' 00:24:21.719 06:50:54 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.719 06:50:54 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.719 06:50:54 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.719 06:50:54 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.719 06:50:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:21.719 [2024-12-06 06:50:54.219352] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
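cmdline.sh@14 above arms an EXIT trap before waiting on the target, so the spdk_tgt it spawned is torn down even if an assertion fails mid-test. The shape of that setup, with names from the trace (capturing the pid via $! is an assumption here, not shown in the log):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  spdk_tgt_pid=$!                        # assumed capture of the backgrounded target
  trap 'killprocess $spdk_tgt_pid' EXIT  # as traced at cmdline.sh@14
  waitforlisten "$spdk_tgt_pid"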
00:24:21.719 [2024-12-06 06:50:54.219748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60765 ] 00:24:21.989 [2024-12-06 06:50:54.407354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.989 [2024-12-06 06:50:54.512707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.985 06:50:55 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.985 06:50:55 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:24:22.986 06:50:55 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:24:22.986 { 00:24:22.986 "version": "SPDK v25.01-pre git sha1 f501a7223", 00:24:22.986 "fields": { 00:24:22.986 "major": 25, 00:24:22.986 "minor": 1, 00:24:22.986 "patch": 0, 00:24:22.986 "suffix": "-pre", 00:24:22.986 "commit": "f501a7223" 00:24:22.986 } 00:24:22.986 } 00:24:22.986 06:50:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:24:22.986 06:50:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:24:22.986 06:50:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:24:22.986 06:50:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:24:23.244 06:50:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:24:23.244 06:50:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.244 06:50:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.244 06:50:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:24:23.244 06:50:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:24:23.244 06:50:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:23.244 06:50:55 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:24:23.503 request: 00:24:23.503 { 00:24:23.503 "method": "env_dpdk_get_mem_stats", 00:24:23.503 "req_id": 1 00:24:23.503 } 00:24:23.503 Got JSON-RPC error response 00:24:23.503 response: 00:24:23.503 { 00:24:23.503 "code": -32601, 00:24:23.503 "message": "Method not found" 00:24:23.503 } 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:23.503 06:50:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60765 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60765 ']' 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60765 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60765 00:24:23.503 killing process with pid 60765 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60765' 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 60765 00:24:23.503 06:50:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 60765 00:24:26.039 00:24:26.039 real 0m4.101s 00:24:26.039 user 0m4.761s 00:24:26.039 sys 0m0.534s 00:24:26.039 06:50:58 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.039 06:50:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:24:26.039 ************************************ 00:24:26.039 END TEST app_cmdline 00:24:26.039 ************************************ 00:24:26.039 06:50:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:24:26.039 06:50:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:26.039 06:50:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.039 06:50:58 -- common/autotest_common.sh@10 -- # set +x 00:24:26.039 ************************************ 00:24:26.039 START TEST version 00:24:26.039 ************************************ 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:24:26.039 * Looking for test storage... 
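The allow-list behavior exercised above is the whole app_cmdline test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while any other method is rejected with JSON-RPC -32601 as if it did not exist. The same contrast as two direct calls (same rpc.py path the test uses; the second is expected to fail):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # on the list -> version JSON
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # not on the list -> -32601 Method not found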
00:24:26.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:26.039 06:50:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.039 06:50:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.039 06:50:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.039 06:50:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.039 06:50:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.039 06:50:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.039 06:50:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.039 06:50:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.039 06:50:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.039 06:50:58 version -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.039 06:50:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.039 06:50:58 version -- scripts/common.sh@344 -- # case "$op" in 00:24:26.039 06:50:58 version -- scripts/common.sh@345 -- # : 1 00:24:26.039 06:50:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.039 06:50:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:26.039 06:50:58 version -- scripts/common.sh@365 -- # decimal 1 00:24:26.039 06:50:58 version -- scripts/common.sh@353 -- # local d=1 00:24:26.039 06:50:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.039 06:50:58 version -- scripts/common.sh@355 -- # echo 1 00:24:26.039 06:50:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.039 06:50:58 version -- scripts/common.sh@366 -- # decimal 2 00:24:26.039 06:50:58 version -- scripts/common.sh@353 -- # local d=2 00:24:26.039 06:50:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.039 06:50:58 version -- scripts/common.sh@355 -- # echo 2 00:24:26.039 06:50:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.039 06:50:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.039 06:50:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.039 06:50:58 version -- scripts/common.sh@368 -- # return 0 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:26.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.039 --rc genhtml_branch_coverage=1 00:24:26.039 --rc genhtml_function_coverage=1 00:24:26.039 --rc genhtml_legend=1 00:24:26.039 --rc geninfo_all_blocks=1 00:24:26.039 --rc geninfo_unexecuted_blocks=1 00:24:26.039 00:24:26.039 ' 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:26.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.039 --rc genhtml_branch_coverage=1 00:24:26.039 --rc genhtml_function_coverage=1 00:24:26.039 --rc genhtml_legend=1 00:24:26.039 --rc geninfo_all_blocks=1 00:24:26.039 --rc geninfo_unexecuted_blocks=1 00:24:26.039 00:24:26.039 ' 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:26.039 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:26.039 --rc genhtml_branch_coverage=1 00:24:26.039 --rc genhtml_function_coverage=1 00:24:26.039 --rc genhtml_legend=1 00:24:26.039 --rc geninfo_all_blocks=1 00:24:26.039 --rc geninfo_unexecuted_blocks=1 00:24:26.039 00:24:26.039 ' 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:26.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.039 --rc genhtml_branch_coverage=1 00:24:26.039 --rc genhtml_function_coverage=1 00:24:26.039 --rc genhtml_legend=1 00:24:26.039 --rc geninfo_all_blocks=1 00:24:26.039 --rc geninfo_unexecuted_blocks=1 00:24:26.039 00:24:26.039 ' 00:24:26.039 06:50:58 version -- app/version.sh@17 -- # get_header_version major 00:24:26.039 06:50:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:24:26.039 06:50:58 version -- app/version.sh@14 -- # cut -f2 00:24:26.039 06:50:58 version -- app/version.sh@14 -- # tr -d '"' 00:24:26.039 06:50:58 version -- app/version.sh@17 -- # major=25 00:24:26.039 06:50:58 version -- app/version.sh@18 -- # get_header_version minor 00:24:26.039 06:50:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:24:26.039 06:50:58 version -- app/version.sh@14 -- # cut -f2 00:24:26.039 06:50:58 version -- app/version.sh@14 -- # tr -d '"' 00:24:26.039 06:50:58 version -- app/version.sh@18 -- # minor=1 00:24:26.039 06:50:58 version -- app/version.sh@19 -- # get_header_version patch 00:24:26.039 06:50:58 version -- app/version.sh@14 -- # cut -f2 00:24:26.039 06:50:58 version -- app/version.sh@14 -- # tr -d '"' 00:24:26.039 06:50:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:24:26.039 06:50:58 version -- app/version.sh@19 -- # patch=0 00:24:26.039 06:50:58 version -- app/version.sh@20 -- # get_header_version suffix 00:24:26.039 06:50:58 version -- app/version.sh@14 -- # cut -f2 00:24:26.039 06:50:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:24:26.039 06:50:58 version -- app/version.sh@14 -- # tr -d '"' 00:24:26.039 06:50:58 version -- app/version.sh@20 -- # suffix=-pre 00:24:26.039 06:50:58 version -- app/version.sh@22 -- # version=25.1 00:24:26.039 06:50:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:24:26.039 06:50:58 version -- app/version.sh@28 -- # version=25.1rc0 00:24:26.039 06:50:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:26.039 06:50:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:24:26.039 06:50:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:24:26.039 06:50:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:24:26.039 00:24:26.039 real 0m0.246s 00:24:26.039 user 0m0.162s 00:24:26.039 sys 0m0.119s 00:24:26.039 ************************************ 00:24:26.039 END TEST version 00:24:26.039 ************************************ 00:24:26.039 06:50:58 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.039 06:50:58 version -- common/autotest_common.sh@10 -- # set +x 00:24:26.039 06:50:58 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:24:26.039 06:50:58 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:24:26.039 06:50:58 -- spdk/autotest.sh@194 -- # uname -s 00:24:26.039 06:50:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:24:26.039 06:50:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:26.039 06:50:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:26.039 06:50:58 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:24:26.039 06:50:58 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:24:26.039 06:50:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.039 06:50:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.039 06:50:58 -- common/autotest_common.sh@10 -- # set +x 00:24:26.039 ************************************ 00:24:26.039 START TEST blockdev_nvme 00:24:26.039 ************************************ 00:24:26.039 06:50:58 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:24:26.039 * Looking for test storage... 00:24:26.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:26.039 06:50:58 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.039 06:50:58 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.039 06:50:58 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.039 06:50:58 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:24:26.039 06:50:58 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.040 06:50:58 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:26.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.040 --rc genhtml_branch_coverage=1 00:24:26.040 --rc genhtml_function_coverage=1 00:24:26.040 --rc genhtml_legend=1 00:24:26.040 --rc geninfo_all_blocks=1 00:24:26.040 --rc geninfo_unexecuted_blocks=1 00:24:26.040 00:24:26.040 ' 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:26.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.040 --rc genhtml_branch_coverage=1 00:24:26.040 --rc genhtml_function_coverage=1 00:24:26.040 --rc genhtml_legend=1 00:24:26.040 --rc geninfo_all_blocks=1 00:24:26.040 --rc geninfo_unexecuted_blocks=1 00:24:26.040 00:24:26.040 ' 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:26.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.040 --rc genhtml_branch_coverage=1 00:24:26.040 --rc genhtml_function_coverage=1 00:24:26.040 --rc genhtml_legend=1 00:24:26.040 --rc geninfo_all_blocks=1 00:24:26.040 --rc geninfo_unexecuted_blocks=1 00:24:26.040 00:24:26.040 ' 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:26.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.040 --rc genhtml_branch_coverage=1 00:24:26.040 --rc genhtml_function_coverage=1 00:24:26.040 --rc genhtml_legend=1 00:24:26.040 --rc geninfo_all_blocks=1 00:24:26.040 --rc geninfo_unexecuted_blocks=1 00:24:26.040 00:24:26.040 ' 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:26.040 06:50:58 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:24:26.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60948 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60948 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60948 ']' 00:24:26.040 06:50:58 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.040 06:50:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:26.299 [2024-12-06 06:50:58.699904] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:24:26.299 [2024-12-06 06:50:58.700259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60948 ] 00:24:26.299 [2024-12-06 06:50:58.882786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.559 [2024-12-06 06:50:58.986289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.496 06:50:59 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.496 06:50:59 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:24:27.496 06:50:59 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:24:27.496 06:50:59 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:24:27.496 06:50:59 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:24:27.496 06:50:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:24:27.496 06:50:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:27.496 06:50:59 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:24:27.496 06:50:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.496 06:50:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.755 06:51:00 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:27.755 06:51:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:24:27.755 06:51:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:24:27.756 06:51:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "835604b2-af31-45a9-b42d-fb277deb0890"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "835604b2-af31-45a9-b42d-fb277deb0890",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d6f88d97-93d5-41a7-b6ee-96ba214abe4e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d6f88d97-93d5-41a7-b6ee-96ba214abe4e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e621b9be-685a-4bf3-9b3f-a1834f87305a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e621b9be-685a-4bf3-9b3f-a1834f87305a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7f4d2350-27e1-4bc6-b917-e5974bc6d4de"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f4d2350-27e1-4bc6-b917-e5974bc6d4de",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7e3371ba-8189-42a1-824b-7c680943bd15"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "7e3371ba-8189-42a1-824b-7c680943bd15",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d7040db7-f8b8-4d1a-b935-a89f6b3e3fce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d7040db7-f8b8-4d1a-b935-a89f6b3e3fce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:24:28.016 06:51:00 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:24:28.016 06:51:00 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:24:28.016 06:51:00 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:24:28.016 06:51:00 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60948 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60948 ']' 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60948 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:24:28.016 06:51:00 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60948 00:24:28.016 killing process with pid 60948 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60948' 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60948 00:24:28.016 06:51:00 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60948 00:24:30.007 06:51:02 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:30.007 06:51:02 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:24:30.007 06:51:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:30.007 06:51:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.007 06:51:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:30.007 ************************************ 00:24:30.007 START TEST bdev_hello_world 00:24:30.007 ************************************ 00:24:30.007 06:51:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:24:30.007 [2024-12-06 06:51:02.540433] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:24:30.007 [2024-12-06 06:51:02.540860] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61042 ] 00:24:30.266 [2024-12-06 06:51:02.715669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.266 [2024-12-06 06:51:02.819682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.201 [2024-12-06 06:51:03.442554] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:31.201 [2024-12-06 06:51:03.442626] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:24:31.201 [2024-12-06 06:51:03.442658] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:31.201 [2024-12-06 06:51:03.445766] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:31.201 [2024-12-06 06:51:03.446279] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:31.201 [2024-12-06 06:51:03.446325] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:31.201 [2024-12-06 06:51:03.446512] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
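hello_bdev above is pointed at test/bdev/bdev.json with -b Nvme0n1, but the file itself is never echoed into the log. A sketch of the kind of config it would carry, reusing the bdev_nvme_attach_controller parameters shown in the setup_nvme_conf step (the /tmp path and single-controller config are illustrative, not the harness's actual file):

    # Write a minimal bdev config and run the example against it.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF
    build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1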
00:24:31.201 00:24:31.201 [2024-12-06 06:51:03.446546] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:32.138 00:24:32.138 real 0m1.990s 00:24:32.138 user 0m1.678s 00:24:32.138 sys 0m0.202s 00:24:32.138 06:51:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.138 06:51:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:32.138 ************************************ 00:24:32.138 END TEST bdev_hello_world 00:24:32.138 ************************************ 00:24:32.138 06:51:04 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:24:32.138 06:51:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.138 06:51:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.138 06:51:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:32.138 ************************************ 00:24:32.138 START TEST bdev_bounds 00:24:32.138 ************************************ 00:24:32.138 06:51:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:24:32.138 Process bdevio pid: 61080 00:24:32.138 06:51:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61080 00:24:32.138 06:51:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:32.138 06:51:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:32.138 06:51:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61080' 00:24:32.139 06:51:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61080 00:24:32.139 06:51:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61080 ']' 00:24:32.139 06:51:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.139 06:51:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.139 06:51:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.139 06:51:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.139 06:51:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:32.139 [2024-12-06 06:51:04.581770] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:24:32.139 [2024-12-06 06:51:04.581915] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61080 ] 00:24:32.397 [2024-12-06 06:51:04.757675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:32.397 [2024-12-06 06:51:04.866800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.397 [2024-12-06 06:51:04.866926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.397 [2024-12-06 06:51:04.866948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.331 06:51:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.331 06:51:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:24:33.331 06:51:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:33.331 I/O targets: 00:24:33.331 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:24:33.331 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:24:33.331 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:33.331 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:33.331 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:33.331 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:24:33.331 00:24:33.331 00:24:33.331 CUnit - A unit testing framework for C - Version 2.1-3 00:24:33.331 http://cunit.sourceforge.net/ 00:24:33.331 00:24:33.331 00:24:33.331 Suite: bdevio tests on: Nvme3n1 00:24:33.331 Test: blockdev write read block ...passed 00:24:33.331 Test: blockdev write zeroes read block ...passed 00:24:33.331 Test: blockdev write zeroes read no split ...passed 00:24:33.331 Test: blockdev write zeroes read split ...passed 00:24:33.331 Test: blockdev write zeroes read split partial ...passed 00:24:33.331 Test: blockdev reset ...[2024-12-06 06:51:05.786858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:24:33.331 passed 00:24:33.331 Test: blockdev write read 8 blocks ...[2024-12-06 06:51:05.791519] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
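The "blockdev reset" test above tears the controller down and reattaches it, as the nvme_ctrlr.c/bdev_nvme.c notices show. The same path can be poked at from outside bdevio over the RPC interface; a sketch, assuming a target already serving /var/tmp/spdk.sock and a build that carries the bdev_nvme_reset_controller RPC:

    # Reset the attached controller registered under the Nvme0 name; the
    # bdev layer queues in-flight I/O during the reset and resumes it after.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme0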
00:24:33.331 passed 00:24:33.331 Test: blockdev write read size > 128k ...passed 00:24:33.331 Test: blockdev write read invalid size ...passed 00:24:33.331 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:33.331 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:33.331 Test: blockdev write read max offset ...passed 00:24:33.331 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:33.331 Test: blockdev writev readv 8 blocks ...passed 00:24:33.331 Test: blockdev writev readv 30 x 1block ...passed 00:24:33.331 Test: blockdev writev readv block ...passed 00:24:33.331 Test: blockdev writev readv size > 128k ...passed 00:24:33.331 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:33.331 Test: blockdev comparev and writev ...[2024-12-06 06:51:05.800212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5e0a000 len:0x1000 00:24:33.331 [2024-12-06 06:51:05.800302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:33.331 passed 00:24:33.331 Test: blockdev nvme passthru rw ...passed 00:24:33.331 Test: blockdev nvme passthru vendor specific ...passed 00:24:33.331 Test: blockdev nvme admin passthru ...[2024-12-06 06:51:05.801266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:33.331 [2024-12-06 06:51:05.801333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:33.331 passed 00:24:33.331 Test: blockdev copy ...passed 00:24:33.331 Suite: bdevio tests on: Nvme2n3 00:24:33.331 Test: blockdev write read block ...passed 00:24:33.331 Test: blockdev write zeroes read block ...passed 00:24:33.331 Test: blockdev write zeroes read no split ...passed 00:24:33.331 Test: blockdev write zeroes read split ...passed 00:24:33.331 Test: blockdev write zeroes read split partial ...passed 00:24:33.331 Test: blockdev reset ...[2024-12-06 06:51:05.867296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:24:33.331 [2024-12-06 06:51:05.872527] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
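The COMPARE FAILURE (02/85) completions logged by the "comparev and writev" tests decode as status code type 0x2 (media and data integrity errors), status code 0x85 (Compare Failure); the suites still report passed, so the miscompare appears to be the outcome the test is checking for. A rough way to provoke the same completion by hand with nvme-cli, assuming the namespace is bound back to the kernel driver rather than claimed by SPDK:

    # Write one block of 0xAA, then COMPARE it against zeroes; the COMPARE
    # should complete with Compare Failure (sct 0x2, sc 0x85).
    dd if=/dev/zero bs=4096 count=1 | tr '\0' '\252' > /tmp/aa.bin
    dd if=/dev/zero of=/tmp/zero.bin bs=4096 count=1
    nvme write   /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=/tmp/aa.bin
    nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=/tmp/zero.bin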
00:24:33.331 passed 00:24:33.331 Test: blockdev write read 8 blocks ...passed 00:24:33.331 Test: blockdev write read size > 128k ...passed 00:24:33.331 Test: blockdev write read invalid size ...passed 00:24:33.331 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:33.332 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:33.332 Test: blockdev write read max offset ...passed 00:24:33.332 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:33.332 Test: blockdev writev readv 8 blocks ...passed 00:24:33.332 Test: blockdev writev readv 30 x 1block ...passed 00:24:33.332 Test: blockdev writev readv block ...passed 00:24:33.332 Test: blockdev writev readv size > 128k ...passed 00:24:33.332 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:33.332 Test: blockdev comparev and writev ...[2024-12-06 06:51:05.884361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a8806000 len:0x1000 00:24:33.332 [2024-12-06 06:51:05.884438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:33.332 passed 00:24:33.332 Test: blockdev nvme passthru rw ...passed 00:24:33.332 Test: blockdev nvme passthru vendor specific ...passed 00:24:33.332 Test: blockdev nvme admin passthru ...[2024-12-06 06:51:05.885376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:33.332 [2024-12-06 06:51:05.885437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:33.332 passed 00:24:33.332 Test: blockdev copy ...passed 00:24:33.332 Suite: bdevio tests on: Nvme2n2 00:24:33.332 Test: blockdev write read block ...passed 00:24:33.332 Test: blockdev write zeroes read block ...passed 00:24:33.332 Test: blockdev write zeroes read no split ...passed 00:24:33.591 Test: blockdev write zeroes read split ...passed 00:24:33.591 Test: blockdev write zeroes read split partial ...passed 00:24:33.591 Test: blockdev reset ...[2024-12-06 06:51:05.956972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:24:33.591 passed 00:24:33.591 Test: blockdev write read 8 blocks ...[2024-12-06 06:51:05.961506] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
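The whole bdevio pass is driven by two commands that already appear in the trace: the bdevio app started in wait mode against the same JSON config, and tests.py triggering the run over RPC. Reproduced by hand from the repository root it looks roughly like this (bdevio must be up and listening before perform_tests is sent; in the harness the waitforlisten step handles that ordering):

    # Start the bdevio RPC server with the test bdev config...
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # ...then kick off all suites against every unclaimed bdev.
    test/bdev/bdevio/tests.py perform_tests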
00:24:33.591 passed 00:24:33.591 Test: blockdev write read size > 128k ...passed 00:24:33.591 Test: blockdev write read invalid size ...passed 00:24:33.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:33.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:33.591 Test: blockdev write read max offset ...passed 00:24:33.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:33.591 Test: blockdev writev readv 8 blocks ...passed 00:24:33.591 Test: blockdev writev readv 30 x 1block ...passed 00:24:33.591 Test: blockdev writev readv block ...passed 00:24:33.591 Test: blockdev writev readv size > 128k ...passed 00:24:33.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:33.591 Test: blockdev comparev and writev ...[2024-12-06 06:51:05.969891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5e3c000 len:0x1000 00:24:33.591 [2024-12-06 06:51:05.969956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:33.591 passed 00:24:33.591 Test: blockdev nvme passthru rw ...passed 00:24:33.591 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:51:05.970845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:33.591 [2024-12-06 06:51:05.971017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:33.591 passed 00:24:33.591 Test: blockdev nvme admin passthru ...passed 00:24:33.591 Test: blockdev copy ...passed 00:24:33.591 Suite: bdevio tests on: Nvme2n1 00:24:33.591 Test: blockdev write read block ...passed 00:24:33.591 Test: blockdev write zeroes read block ...passed 00:24:33.591 Test: blockdev write zeroes read no split ...passed 00:24:33.591 Test: blockdev write zeroes read split ...passed 00:24:33.591 Test: blockdev write zeroes read split partial ...passed 00:24:33.591 Test: blockdev reset ...[2024-12-06 06:51:06.049385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:24:33.591 [2024-12-06 06:51:06.053911] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:24:33.591 passed 00:24:33.591 Test: blockdev write read 8 blocks ...passed 00:24:33.591 Test: blockdev write read size > 128k ...passed 00:24:33.591 Test: blockdev write read invalid size ...passed 00:24:33.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:33.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:33.591 Test: blockdev write read max offset ...passed 00:24:33.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:33.591 Test: blockdev writev readv 8 blocks ...passed 00:24:33.591 Test: blockdev writev readv 30 x 1block ...passed 00:24:33.591 Test: blockdev writev readv block ...passed 00:24:33.591 Test: blockdev writev readv size > 128k ...passed 00:24:33.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:33.591 Test: blockdev comparev and writev ...[2024-12-06 06:51:06.063419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5e38000 len:0x1000 00:24:33.591 [2024-12-06 06:51:06.063526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:33.591 passed 00:24:33.591 Test: blockdev nvme passthru rw ...passed 00:24:33.591 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:51:06.064515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:33.591 [2024-12-06 06:51:06.064568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:33.591 passed 00:24:33.591 Test: blockdev nvme admin passthru ...passed 00:24:33.591 Test: blockdev copy ...passed 00:24:33.591 Suite: bdevio tests on: Nvme1n1 00:24:33.591 Test: blockdev write read block ...passed 00:24:33.591 Test: blockdev write zeroes read block ...passed 00:24:33.591 Test: blockdev write zeroes read no split ...passed 00:24:33.591 Test: blockdev write zeroes read split ...passed 00:24:33.591 Test: blockdev write zeroes read split partial ...passed 00:24:33.591 Test: blockdev reset ...[2024-12-06 06:51:06.138930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:24:33.591 passed 00:24:33.591 Test: blockdev write read 8 blocks ...[2024-12-06 06:51:06.142523] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
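In the "nvme passthru vendor specific" tests the admin command is printed as FABRIC CONNECT only because its opcode (0x7f) is the one the fabrics decoder recognizes; on these PCIe controllers that opcode is unimplemented, so the INVALID OPCODE (00/01) completion is the anticipated result. Something similar can be tried with nvme-cli's raw admin passthrough, again assuming a kernel-bound controller (the opcode choice is illustrative):

    # Send an admin command the controller does not implement and expect
    # "Invalid Command Opcode" (sct 0x0, sc 0x01) in the completion.
    nvme admin-passthru /dev/nvme0 --opcode=0x7f -r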
00:24:33.591 passed 00:24:33.591 Test: blockdev write read size > 128k ...passed 00:24:33.591 Test: blockdev write read invalid size ...passed 00:24:33.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:33.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:33.591 Test: blockdev write read max offset ...passed 00:24:33.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:33.591 Test: blockdev writev readv 8 blocks ...passed 00:24:33.591 Test: blockdev writev readv 30 x 1block ...passed 00:24:33.591 Test: blockdev writev readv block ...passed 00:24:33.591 Test: blockdev writev readv size > 128k ...passed 00:24:33.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:33.591 Test: blockdev comparev and writev ...[2024-12-06 06:51:06.150194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5e34000 len:0x1000 00:24:33.591 [2024-12-06 06:51:06.150260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:33.591 passed 00:24:33.591 Test: blockdev nvme passthru rw ...passed 00:24:33.591 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:51:06.150980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:33.591 [2024-12-06 06:51:06.151027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:33.591 passed 00:24:33.591 Test: blockdev nvme admin passthru ...passed 00:24:33.591 Test: blockdev copy ...passed 00:24:33.591 Suite: bdevio tests on: Nvme0n1 00:24:33.591 Test: blockdev write read block ...passed 00:24:33.591 Test: blockdev write zeroes read block ...passed 00:24:33.591 Test: blockdev write zeroes read no split ...passed 00:24:33.882 Test: blockdev write zeroes read split ...passed 00:24:33.882 Test: blockdev write zeroes read split partial ...passed 00:24:33.882 Test: blockdev reset ...[2024-12-06 06:51:06.228770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:24:33.882 [2024-12-06 06:51:06.232516] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:24:33.882 passed 00:24:33.882 Test: blockdev write read 8 blocks ...passed 00:24:33.882 Test: blockdev write read size > 128k ...passed 00:24:33.882 Test: blockdev write read invalid size ...passed 00:24:33.882 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:33.882 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:33.882 Test: blockdev write read max offset ...passed 00:24:33.882 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:33.882 Test: blockdev writev readv 8 blocks ...passed 00:24:33.882 Test: blockdev writev readv 30 x 1block ...passed 00:24:33.882 Test: blockdev writev readv block ...passed 00:24:33.882 Test: blockdev writev readv size > 128k ...passed 00:24:33.882 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:33.882 Test: blockdev comparev and writev ...[2024-12-06 06:51:06.242338] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:24:33.882 separate metadata which is not supported yet. passed
00:24:33.882 00:24:33.882 Test: blockdev nvme passthru rw ...passed 00:24:33.882 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:51:06.243284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:24:33.882 [2024-12-06 06:51:06.243488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:24:33.882 passed 00:24:33.882 Test: blockdev nvme admin passthru ...passed 00:24:33.882 Test: blockdev copy ...passed 00:24:33.882 00:24:33.882 Run Summary: Type Total Ran Passed Failed Inactive 00:24:33.882 suites 6 6 n/a 0 0 00:24:33.882 tests 138 138 138 0 0 00:24:33.882 asserts 893 893 893 0 n/a 00:24:33.882 00:24:33.882 Elapsed time = 1.412 seconds 00:24:33.882 0 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61080 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61080 ']' 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61080 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61080 00:24:33.882 killing process with pid 61080 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61080' 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61080 00:24:33.882 06:51:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61080
bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61145 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61145 /var/tmp/spdk-nbd.sock 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61145 ']' 00:24:34.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.818 06:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:34.818 [2024-12-06 06:51:07.382320] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
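nbd_function_test exports each bdev as a kernel /dev/nbdX node through the bdev_svc app's RPC socket and then exercises it with dd, exactly as the trace below shows. The core of that round trip, using the same RPCs:

    # Export a bdev over NBD, read one 4 KiB block through the kernel, detach.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks    # list active exports
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0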
00:24:34.818 [2024-12-06 06:51:07.382497] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.076 [2024-12-06 06:51:07.567242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.334 [2024-12-06 06:51:07.673896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:35.901 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:36.465 1+0 records in 
00:24:36.465 1+0 records out 00:24:36.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544442 s, 7.5 MB/s 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:36.465 06:51:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:36.722 1+0 records in 00:24:36.722 1+0 records out 00:24:36.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536669 s, 7.6 MB/s 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:36.722 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.020 1+0 records in 00:24:37.020 1+0 records out 00:24:37.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547819 s, 7.5 MB/s 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:37.020 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.299 1+0 records in 00:24:37.299 1+0 records out 00:24:37.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688359 s, 6.0 MB/s 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.299 06:51:09 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:37.299 06:51:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.866 1+0 records in 00:24:37.866 1+0 records out 00:24:37.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688587 s, 5.9 MB/s 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:37.866 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:37.867 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:37.867 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:38.125 1+0 records in 00:24:38.125 1+0 records out 00:24:38.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733069 s, 5.6 MB/s 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:38.125 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:38.126 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:38.126 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd0", 00:24:38.385 "bdev_name": "Nvme0n1" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd1", 00:24:38.385 "bdev_name": "Nvme1n1" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd2", 00:24:38.385 "bdev_name": "Nvme2n1" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd3", 00:24:38.385 "bdev_name": "Nvme2n2" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd4", 00:24:38.385 "bdev_name": "Nvme2n3" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd5", 00:24:38.385 "bdev_name": "Nvme3n1" 00:24:38.385 } 00:24:38.385 ]' 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd0", 00:24:38.385 "bdev_name": "Nvme0n1" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd1", 00:24:38.385 "bdev_name": "Nvme1n1" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd2", 00:24:38.385 "bdev_name": "Nvme2n1" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd3", 00:24:38.385 "bdev_name": "Nvme2n2" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd4", 00:24:38.385 "bdev_name": "Nvme2n3" 00:24:38.385 }, 00:24:38.385 { 00:24:38.385 "nbd_device": "/dev/nbd5", 00:24:38.385 "bdev_name": "Nvme3n1" 00:24:38.385 } 00:24:38.385 ]' 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:38.385 06:51:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:38.644 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:24:39.212 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.470 06:51:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.730 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.989 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:24:40.247 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:24:40.247 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:24:40.247 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:24:40.248 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:40.248 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:40.248 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:24:40.248 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:40.248 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:40.248 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:40.248 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:40.248 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:40.506 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:40.506 06:51:12 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:40.506 06:51:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:40.506 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:24:40.764 /dev/nbd0 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:41.023 
06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:41.023 1+0 records in 00:24:41.023 1+0 records out 00:24:41.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071518 s, 5.7 MB/s 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:41.023 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:24:41.282 /dev/nbd1 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:41.282 1+0 records in 00:24:41.282 1+0 records out 00:24:41.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532134 s, 7.7 MB/s 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:41.282 06:51:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:24:41.540 /dev/nbd10 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:41.540 1+0 records in 00:24:41.540 1+0 records out 00:24:41.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652536 s, 6.3 MB/s 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:41.540 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:24:41.829 /dev/nbd11 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:41.829 1+0 records in 00:24:41.829 1+0 records out 00:24:41.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639703 s, 6.4 MB/s 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:41.829 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:24:42.089 /dev/nbd12 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.089 1+0 records in 00:24:42.089 1+0 records out 00:24:42.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755327 s, 5.4 MB/s 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:42.089 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:24:42.348 /dev/nbd13 00:24:42.608 06:51:14 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.608 1+0 records in 00:24:42.608 1+0 records out 00:24:42.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00250868 s, 1.6 MB/s 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:42.608 06:51:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd0", 00:24:42.866 "bdev_name": "Nvme0n1" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd1", 00:24:42.866 "bdev_name": "Nvme1n1" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd10", 00:24:42.866 "bdev_name": "Nvme2n1" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd11", 00:24:42.866 "bdev_name": "Nvme2n2" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd12", 00:24:42.866 "bdev_name": "Nvme2n3" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd13", 00:24:42.866 "bdev_name": "Nvme3n1" 00:24:42.866 } 00:24:42.866 ]' 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd0", 00:24:42.866 "bdev_name": "Nvme0n1" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd1", 00:24:42.866 "bdev_name": "Nvme1n1" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd10", 00:24:42.866 "bdev_name": "Nvme2n1" 00:24:42.866 }, 00:24:42.866 
{ 00:24:42.866 "nbd_device": "/dev/nbd11", 00:24:42.866 "bdev_name": "Nvme2n2" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd12", 00:24:42.866 "bdev_name": "Nvme2n3" 00:24:42.866 }, 00:24:42.866 { 00:24:42.866 "nbd_device": "/dev/nbd13", 00:24:42.866 "bdev_name": "Nvme3n1" 00:24:42.866 } 00:24:42.866 ]' 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:42.866 /dev/nbd1 00:24:42.866 /dev/nbd10 00:24:42.866 /dev/nbd11 00:24:42.866 /dev/nbd12 00:24:42.866 /dev/nbd13' 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:42.866 /dev/nbd1 00:24:42.866 /dev/nbd10 00:24:42.866 /dev/nbd11 00:24:42.866 /dev/nbd12 00:24:42.866 /dev/nbd13' 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:42.866 256+0 records in 00:24:42.866 256+0 records out 00:24:42.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657572 s, 159 MB/s 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:42.866 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:43.123 256+0 records in 00:24:43.123 256+0 records out 00:24:43.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12978 s, 8.1 MB/s 00:24:43.123 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:43.123 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:43.123 256+0 records in 00:24:43.123 256+0 records out 00:24:43.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125192 s, 8.4 MB/s 00:24:43.123 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:43.123 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:24:43.381 256+0 records in 00:24:43.381 256+0 records out 00:24:43.381 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.14268 s, 7.3 MB/s 00:24:43.381 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:43.381 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:24:43.381 256+0 records in 00:24:43.381 256+0 records out 00:24:43.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182341 s, 5.8 MB/s 00:24:43.381 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:43.381 06:51:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:24:43.639 256+0 records in 00:24:43.640 256+0 records out 00:24:43.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174438 s, 6.0 MB/s 00:24:43.640 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:43.640 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:24:43.903 256+0 records in 00:24:43.903 256+0 records out 00:24:43.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163215 s, 6.4 MB/s 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:43.903 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:44.186 06:51:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:44.444 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:45.013 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:45.271 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:45.530 06:51:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:24:45.789 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:24:46.048 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:46.615 malloc_lvol_verify 00:24:46.616 06:51:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:46.616 38bfbc01-bddd-478e-a078-23e4d3e4d3df 00:24:46.875 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:46.875 3714e9a9-3338-4102-acd6-ab07f2112fa4 00:24:46.875 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:47.135 /dev/nbd0 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:24:47.394 mke2fs 1.47.0 (5-Feb-2023) 00:24:47.394 Discarding device blocks: 0/4096 done 00:24:47.394 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:47.394 00:24:47.394 Allocating group tables: 0/1 done 00:24:47.394 Writing inode tables: 0/1 done 00:24:47.394 Creating journal (1024 blocks): done 00:24:47.394 Writing superblocks and filesystem accounting information: 0/1 done 00:24:47.394 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:47.394 06:51:19 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.394 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:47.653 06:51:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61145 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61145 ']' 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61145 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61145 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.653 killing process with pid 61145 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61145' 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61145 00:24:47.653 06:51:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61145 00:24:48.589 06:51:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:48.589 00:24:48.589 real 0m13.847s 00:24:48.589 user 0m20.252s 00:24:48.589 sys 0m4.200s 00:24:48.589 06:51:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.589 06:51:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:48.589 ************************************ 00:24:48.589 END TEST bdev_nbd 00:24:48.589 ************************************ 00:24:48.589 06:51:21 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:24:48.589 06:51:21 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:24:48.589 skipping fio tests on NVMe due to multi-ns failures. 00:24:48.589 06:51:21 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
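The teardown traced above repeats one idiom per nbd device: nbd_stop_disk over the RPC socket, then waitfornbd_exit, which polls /proc/partitions until the kernel releases the node. A minimal sketch of that polling helper, reconstructed from the xtrace: only the grep, the 20-iteration bound, the break, and the final return 0 are visible in the log, so the sleep interval here is an assumption.

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # assumed poll interval; not shown in the xtrace
            else
                break       # device row is gone, the detach completed
            fi
        done
        return 0
    }

Note that grep -w matches whole words, which is why the nbd1 check does not also match nbd10 through nbd13.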
00:24:48.589 06:51:21 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:48.589 06:51:21 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:48.589 06:51:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:48.589 06:51:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.589 06:51:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:48.589 ************************************ 00:24:48.589 START TEST bdev_verify 00:24:48.589 ************************************ 00:24:48.589 06:51:21 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:48.847 [2024-12-06 06:51:21.256949] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:24:48.847 [2024-12-06 06:51:21.257105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61563 ] 00:24:48.847 [2024-12-06 06:51:21.431280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:49.106 [2024-12-06 06:51:21.538102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.106 [2024-12-06 06:51:21.538115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.673 Running I/O for 5 seconds... 00:24:52.002 19712.00 IOPS, 77.00 MiB/s [2024-12-06T06:51:25.528Z] 19840.00 IOPS, 77.50 MiB/s [2024-12-06T06:51:26.460Z] 18858.67 IOPS, 73.67 MiB/s [2024-12-06T06:51:27.395Z] 18640.00 IOPS, 72.81 MiB/s [2024-12-06T06:51:27.395Z] 18572.80 IOPS, 72.55 MiB/s 00:24:54.804 Latency(us) 00:24:54.804 [2024-12-06T06:51:27.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.804 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x0 length 0xbd0bd 00:24:54.804 Nvme0n1 : 5.05 1494.04 5.84 0.00 0.00 85276.83 16801.05 95325.09 00:24:54.804 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:24:54.804 Nvme0n1 : 5.08 1562.60 6.10 0.00 0.00 81704.91 14060.45 82456.20 00:24:54.804 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x0 length 0xa0000 00:24:54.804 Nvme1n1 : 5.06 1493.44 5.83 0.00 0.00 85126.40 19660.80 91035.46 00:24:54.804 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0xa0000 length 0xa0000 00:24:54.804 Nvme1n1 : 5.08 1562.14 6.10 0.00 0.00 81561.24 13941.29 75306.82 00:24:54.804 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x0 length 0x80000 00:24:54.804 Nvme2n1 : 5.08 1498.76 5.85 0.00 0.00 84695.97 8757.99 93418.59 00:24:54.804 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x80000 length 0x80000 00:24:54.804 Nvme2n1 : 5.08 1561.07 6.10 0.00 0.00 81433.53 15728.64 73400.32 00:24:54.804 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x0 length 0x80000 00:24:54.804 Nvme2n2 : 5.08 1498.16 5.85 0.00 0.00 84560.83 9115.46 94371.84 00:24:54.804 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x80000 length 0x80000 00:24:54.804 Nvme2n2 : 5.09 1559.90 6.09 0.00 0.00 81318.05 17873.45 77213.32 00:24:54.804 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x0 length 0x80000 00:24:54.804 Nvme2n3 : 5.10 1506.16 5.88 0.00 0.00 84142.57 10068.71 94848.47 00:24:54.804 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x80000 length 0x80000 00:24:54.804 Nvme2n3 : 5.09 1559.37 6.09 0.00 0.00 81158.64 17039.36 80549.70 00:24:54.804 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x0 length 0x20000 00:24:54.804 Nvme3n1 : 5.10 1505.61 5.88 0.00 0.00 84010.57 10307.03 97231.59 00:24:54.804 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:54.804 Verification LBA range: start 0x20000 length 0x20000 00:24:54.804 Nvme3n1 : 5.09 1558.90 6.09 0.00 0.00 81026.76 10485.76 82932.83 00:24:54.804 [2024-12-06T06:51:27.395Z] =================================================================================================================== 00:24:54.804 [2024-12-06T06:51:27.395Z] Total : 18360.16 71.72 0.00 0.00 82966.19 8757.99 97231.59 00:24:56.176 00:24:56.176 real 0m7.525s 00:24:56.176 user 0m13.968s 00:24:56.176 sys 0m0.238s 00:24:56.176 06:51:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.176 ************************************ 00:24:56.176 END TEST bdev_verify 00:24:56.176 ************************************ 00:24:56.176 06:51:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:24:56.176 06:51:28 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:56.176 06:51:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:56.176 06:51:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.176 06:51:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:56.176 ************************************ 00:24:56.176 START TEST bdev_verify_big_io 00:24:56.176 ************************************ 00:24:56.176 06:51:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:56.434 [2024-12-06 06:51:28.835078] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
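The bdev_verify pass that just completed and the bdev_verify_big_io pass starting here drive the same bdevperf binary; only -o, the I/O size, changes (4096 vs 65536 bytes). A reflowed form of that command line with per-flag glosses; the glosses follow bdevperf's usage text and are meant as a reading aid, not authoritative documentation.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev config to load
        -q 128      # queue depth per job
        -o 65536    # I/O size in bytes (4096 in the plain verify pass)
        -w verify   # write each block, read it back, compare
        -t 5        # run time in seconds
        -C          # let every core submit I/O to every bdev
        -m 0x3      # core mask: reactors on cores 0 and 1
    )
    "$bdevperf" "${args[@]}"

The 0x3 mask matches the two "Reactor started on core 0/1" notices above, and is why each NvmeXnY device appears twice in the Latency table, once per core-mask 0x1 and 0x2 job.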
00:24:56.434 [2024-12-06 06:51:28.835258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61661 ] 00:24:56.434 [2024-12-06 06:51:29.011826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:56.692 [2024-12-06 06:51:29.116228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.692 [2024-12-06 06:51:29.116240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.624 Running I/O for 5 seconds... 00:25:01.424 272.00 IOPS, 17.00 MiB/s [2024-12-06T06:51:35.917Z] 1093.50 IOPS, 68.34 MiB/s [2024-12-06T06:51:36.175Z] 1912.33 IOPS, 119.52 MiB/s 00:25:03.584 Latency(us) 00:25:03.584 [2024-12-06T06:51:36.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.584 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x0 length 0xbd0b 00:25:03.584 Nvme0n1 : 5.62 113.83 7.11 0.00 0.00 1081362.06 16443.58 1029510.98 00:25:03.584 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0xbd0b length 0xbd0b 00:25:03.584 Nvme0n1 : 5.77 121.94 7.62 0.00 0.00 1012358.81 17754.30 1121023.07 00:25:03.584 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x0 length 0xa000 00:25:03.584 Nvme1n1 : 5.71 117.13 7.32 0.00 0.00 1026423.25 85792.58 934185.89 00:25:03.584 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0xa000 length 0xa000 00:25:03.584 Nvme1n1 : 5.88 118.28 7.39 0.00 0.00 1012602.93 60531.43 1616713.54 00:25:03.584 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x0 length 0x8000 00:25:03.584 Nvme2n1 : 5.78 121.71 7.61 0.00 0.00 964279.98 68634.07 918933.88 00:25:03.584 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x8000 length 0x8000 00:25:03.584 Nvme2n1 : 5.89 117.91 7.37 0.00 0.00 978398.91 60769.75 1647217.57 00:25:03.584 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x0 length 0x8000 00:25:03.584 Nvme2n2 : 5.79 121.66 7.60 0.00 0.00 934180.47 69587.32 949437.91 00:25:03.584 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x8000 length 0x8000 00:25:03.584 Nvme2n2 : 5.89 122.27 7.64 0.00 0.00 916232.83 47662.55 1685347.61 00:25:03.584 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x0 length 0x8000 00:25:03.584 Nvme2n3 : 5.88 129.59 8.10 0.00 0.00 854440.57 34793.66 1243039.19 00:25:03.584 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x8000 length 0x8000 00:25:03.584 Nvme2n3 : 5.94 132.31 8.27 0.00 0.00 816972.37 10962.39 1715851.64 00:25:03.584 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x0 length 0x2000 00:25:03.584 Nvme3n1 : 5.89 140.46 8.78 0.00 0.00 767159.73 2949.12 1258291.20 00:25:03.584 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:25:03.584 Verification LBA range: start 0x2000 length 0x2000 00:25:03.584 Nvme3n1 : 6.02 167.25 10.45 0.00 0.00 632516.67 543.65 1738729.66 00:25:03.584 [2024-12-06T06:51:36.175Z] =================================================================================================================== 00:25:03.584 [2024-12-06T06:51:36.175Z] Total : 1524.34 95.27 0.00 0.00 901639.43 543.65 1738729.66 00:25:05.488 00:25:05.488 real 0m8.872s 00:25:05.488 user 0m16.625s 00:25:05.488 sys 0m0.262s 00:25:05.488 06:51:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.488 06:51:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.488 ************************************ 00:25:05.488 END TEST bdev_verify_big_io 00:25:05.488 ************************************ 00:25:05.488 06:51:37 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:05.488 06:51:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:05.488 06:51:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.488 06:51:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:05.488 ************************************ 00:25:05.488 START TEST bdev_write_zeroes 00:25:05.488 ************************************ 00:25:05.488 06:51:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:05.488 [2024-12-06 06:51:37.775287] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:25:05.488 [2024-12-06 06:51:37.775487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61780 ] 00:25:05.488 [2024-12-06 06:51:37.961281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.488 [2024-12-06 06:51:38.071772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.423 Running I/O for 1 seconds... 
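Every bdevperf pass here, including the write_zeroes run just launched, ends with the same fixed-width Latency table: runtime in seconds, IOPS, MiB/s, failed and timed-out I/O counts, then average/min/max latency in microseconds, with a Total row last. A hedged one-liner for pulling that Total row out of a saved copy of the raw bdevperf output (the file name is assumed, and the field positions assume the bare tool output without the CI timestamp prefixes seen in this log):

    awk '/Total[[:space:]]+:/ { print "IOPS=" $3, "MiB/s=" $4 }' bdevperf.log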
00:25:07.358 52224.00 IOPS, 204.00 MiB/s
00:25:07.358 Latency(us)
[2024-12-06T06:51:39.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:07.358 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:25:07.358 Nvme0n1 : 1.03 8612.97 33.64 0.00 0.00 14814.51 5570.56 30384.87
00:25:07.358 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:25:07.358 Nvme1n1 : 1.04 8595.14 33.57 0.00 0.00 14820.43 11439.01 30146.56
00:25:07.359 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:25:07.359 Nvme2n1 : 1.04 8577.71 33.51 0.00 0.00 14788.22 11677.32 29550.78
00:25:07.359 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:25:07.359 Nvme2n2 : 1.04 8561.86 33.44 0.00 0.00 14790.26 11677.32 29074.15
00:25:07.359 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:25:07.359 Nvme2n3 : 1.04 8551.60 33.40 0.00 0.00 14730.15 9770.82 29193.31
00:25:07.359 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:25:07.359 Nvme3n1 : 1.04 8541.26 33.36 0.00 0.00 14704.38 8043.05 30980.65
00:25:07.359 [2024-12-06T06:51:39.950Z] ===================================================================================================================
00:25:07.359 [2024-12-06T06:51:39.950Z] Total : 51440.54 200.94 0.00 0.00 14774.66 5570.56 30980.65
00:25:08.295 
00:25:08.295 real 0m3.141s
00:25:08.295 user 0m2.774s
00:25:08.295 sys 0m0.240s
00:25:08.295 06:51:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:08.295 06:51:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:25:08.295 ************************************
00:25:08.295 END TEST bdev_write_zeroes
00:25:08.295 ************************************
00:25:08.295 06:51:40 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:25:08.295 06:51:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:25:08.295 06:51:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:08.295 06:51:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:25:08.295 ************************************
00:25:08.295 START TEST bdev_json_nonenclosed
00:25:08.295 ************************************
00:25:08.295 06:51:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:25:08.554 [2024-12-06 06:51:40.941132] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
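bdev_json_nonenclosed deliberately hands bdevperf a configuration whose top level is not a JSON object and passes only if the loader rejects it. The real test/bdev/nonenclosed.json is not printed in this log; a hypothetical file of the shape the error message describes ("not enclosed in {}") could look like this:

    cat > nonenclosed.json <<'EOF'
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
    EOF

The expected outcome is exactly the json_config.c rejection and the non-zero spdk_app_stop traced below.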
00:25:08.554 [2024-12-06 06:51:40.941299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:25:08.554 [2024-12-06 06:51:41.133962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.814 [2024-12-06 06:51:41.249994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.814 [2024-12-06 06:51:41.250103] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:08.814 [2024-12-06 06:51:41.250132] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:08.814 [2024-12-06 06:51:41.250146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:09.073 00:25:09.073 real 0m0.653s 00:25:09.073 user 0m0.422s 00:25:09.073 sys 0m0.125s 00:25:09.073 06:51:41 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.073 ************************************ 00:25:09.073 06:51:41 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:09.073 END TEST bdev_json_nonenclosed 00:25:09.073 ************************************ 00:25:09.073 06:51:41 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:09.073 06:51:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:09.073 06:51:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.073 06:51:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:09.073 ************************************ 00:25:09.073 START TEST bdev_json_nonarray 00:25:09.073 ************************************ 00:25:09.073 06:51:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:09.073 [2024-12-06 06:51:41.658484] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:25:09.073 [2024-12-06 06:51:41.658657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61854 ] 00:25:09.332 [2024-12-06 06:51:41.843354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.591 [2024-12-06 06:51:41.969298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.591 [2024-12-06 06:51:41.969435] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
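The two configuration errors above are deliberate: nonenclosed.json and nonarray.json exist to prove that bdevperf refuses to start on malformed JSON and exits non-zero (hence the expected "spdk_app_stop'd on non-zero" warning) rather than running with a bad setup. Their exact contents are not printed in this log; the shapes below are illustrative guesses at what would trip each check, not the verbatim files:

    # Assumed shape only: top level not enclosed in {} ->
    # "Invalid JSON configuration: not enclosed in {}."
    printf '%s\n' '"subsystems": []' > nonenclosed.json
    # Assumed shape only: "subsystems" present but not an array ->
    # "Invalid JSON configuration: 'subsystems' should be an array."
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > nonarray.json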
00:25:09.591 [2024-12-06 06:51:41.969469] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:09.591 [2024-12-06 06:51:41.969486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:09.851 00:25:09.851 real 0m0.683s 00:25:09.851 user 0m0.454s 00:25:09.851 sys 0m0.124s 00:25:09.851 06:51:42 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.851 06:51:42 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:09.851 ************************************ 00:25:09.851 END TEST bdev_json_nonarray 00:25:09.851 ************************************ 00:25:09.851 06:51:42 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:25:09.851 06:51:42 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:25:09.852 06:51:42 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:25:09.852 ************************************ 00:25:09.852 END TEST blockdev_nvme 00:25:09.852 ************************************ 00:25:09.852 00:25:09.852 real 0m43.902s 00:25:09.852 user 1m7.618s 00:25:09.852 sys 0m6.630s 00:25:09.852 06:51:42 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.852 06:51:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:09.852 06:51:42 -- spdk/autotest.sh@209 -- # uname -s 00:25:09.852 06:51:42 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:25:09.852 06:51:42 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:25:09.852 06:51:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:09.852 06:51:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.852 06:51:42 -- common/autotest_common.sh@10 -- # set +x 00:25:09.852 ************************************ 00:25:09.852 START TEST blockdev_nvme_gpt 00:25:09.852 ************************************ 00:25:09.852 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:25:09.852 * Looking for test storage... 
00:25:09.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:09.852 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:09.852 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:25:09.852 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:10.112 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.112 06:51:42 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:25:10.112 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.112 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:10.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.112 --rc genhtml_branch_coverage=1 00:25:10.112 --rc genhtml_function_coverage=1 00:25:10.112 --rc genhtml_legend=1 00:25:10.112 --rc geninfo_all_blocks=1 00:25:10.112 --rc geninfo_unexecuted_blocks=1 00:25:10.112 00:25:10.112 ' 00:25:10.112 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:10.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.112 --rc 
genhtml_branch_coverage=1 00:25:10.112 --rc genhtml_function_coverage=1 00:25:10.112 --rc genhtml_legend=1 00:25:10.112 --rc geninfo_all_blocks=1 00:25:10.112 --rc geninfo_unexecuted_blocks=1 00:25:10.112 00:25:10.112 ' 00:25:10.112 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:10.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.112 --rc genhtml_branch_coverage=1 00:25:10.112 --rc genhtml_function_coverage=1 00:25:10.112 --rc genhtml_legend=1 00:25:10.112 --rc geninfo_all_blocks=1 00:25:10.112 --rc geninfo_unexecuted_blocks=1 00:25:10.112 00:25:10.112 ' 00:25:10.112 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:10.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.112 --rc genhtml_branch_coverage=1 00:25:10.112 --rc genhtml_function_coverage=1 00:25:10.112 --rc genhtml_legend=1 00:25:10.112 --rc geninfo_all_blocks=1 00:25:10.112 --rc geninfo_unexecuted_blocks=1 00:25:10.112 00:25:10.112 ' 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:25:10.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
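The trailing "Waiting for process to start up..." message is waitforlisten blocking until the spdk_tgt launched on the next lines answers RPC on /var/tmp/spdk.sock. A sketch of the rough shape of that handshake (the real helper is waitforlisten in test/common/autotest_common.sh, which also applies a retry limit):

    # Start the target, then poll its RPC socket until it responds.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done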
00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:25:10.112 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:25:10.113 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:25:10.113 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:25:10.113 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:25:10.113 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:25:10.113 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61938 00:25:10.113 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:10.113 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61938 00:25:10.113 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61938 ']' 00:25:10.113 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.113 06:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:10.113 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.113 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.113 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.113 06:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:10.113 [2024-12-06 06:51:42.651660] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:25:10.113 [2024-12-06 06:51:42.652287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:25:10.372 [2024-12-06 06:51:42.834835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.372 [2024-12-06 06:51:42.939112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.311 06:51:43 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.311 06:51:43 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:25:11.311 06:51:43 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:25:11.311 06:51:43 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:25:11.311 06:51:43 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:11.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:11.828 Waiting for block devices as requested 00:25:11.828 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:11.828 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:12.086 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:12.086 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.384 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != 
none ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:25:17.384 BYT; 00:25:17.384 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- 
# [[ Error: /dev/nvme0n1: unrecognised disk label 00:25:17.384 BYT; 00:25:17.384 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:25:17.384 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:17.384 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:17.385 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:17.385 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:17.385 06:51:49 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:17.385 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:17.385 06:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 
/dev/nvme0n1 00:25:18.321 The operation has completed successfully. 00:25:18.321 06:51:50 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:25:19.256 The operation has completed successfully. 00:25:19.256 06:51:51 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:19.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:20.514 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.514 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.514 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.514 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.514 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:25:20.514 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.514 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:20.514 [] 00:25:20.514 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.514 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:25:20.514 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:25:20.514 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:25:20.514 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:20.774 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:25:20.774 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.774 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.034 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.034 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:25:21.034 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.034 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:21.034 06:51:53 blockdev_nvme_gpt 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.034 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.034 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:25:21.034 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:25:21.034 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:21.034 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.035 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:25:21.035 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:25:21.035 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2ef288ef-4312-46fe-b6b0-2e006da5d672"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2ef288ef-4312-46fe-b6b0-2e006da5d672",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": 
true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4b405199-1779-45ab-bfa3-211259eb8301"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b405199-1779-45ab-bfa3-211259eb8301",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2fe09bdc-ca4a-4c0b-9d00-fe8c915680d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2fe09bdc-ca4a-4c0b-9d00-fe8c915680d8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1d5ee11f-28e1-4269-a0e0-d29762e53643"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d5ee11f-28e1-4269-a0e0-d29762e53643",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "dc6df946-3b7c-449b-b54c-48b0628ad07a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dc6df946-3b7c-449b-b54c-48b0628ad07a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' 
"pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:25:21.295 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:25:21.295 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:25:21.295 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:25:21.295 06:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61938 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61938 ']' 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61938 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61938 00:25:21.295 killing process with pid 61938 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61938' 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61938 00:25:21.295 06:51:53 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61938 00:25:23.199 06:51:55 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:23.199 06:51:55 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:25:23.199 06:51:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:23.199 06:51:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.199 06:51:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:23.199 ************************************ 00:25:23.199 START TEST bdev_hello_world 00:25:23.199 ************************************ 00:25:23.199 06:51:55 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:25:23.457 [2024-12-06 06:51:55.853001] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:25:23.457 [2024-12-06 06:51:55.853186] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62572 ] 00:25:23.457 [2024-12-06 06:51:56.035851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.716 [2024-12-06 06:51:56.136507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.283 [2024-12-06 06:51:56.757948] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:24.283 [2024-12-06 06:51:56.758013] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:25:24.283 [2024-12-06 06:51:56.758048] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:24.283 [2024-12-06 06:51:56.761079] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:24.284 [2024-12-06 06:51:56.761604] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:24.284 [2024-12-06 06:51:56.761644] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:24.284 [2024-12-06 06:51:56.761855] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:25:24.284 00:25:24.284 [2024-12-06 06:51:56.761891] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:25.220 00:25:25.220 real 0m2.002s 00:25:25.220 user 0m1.680s 00:25:25.220 sys 0m0.213s 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:25.220 ************************************ 00:25:25.220 END TEST bdev_hello_world 00:25:25.220 ************************************ 00:25:25.220 06:51:57 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:25:25.220 06:51:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:25.220 06:51:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.220 06:51:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:25.220 ************************************ 00:25:25.220 START TEST bdev_bounds 00:25:25.220 ************************************ 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62610 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:25.220 Process bdevio pid: 62610 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62610' 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62610 00:25:25.220 06:51:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62610 ']' 00:25:25.479 06:51:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.479 06:51:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.479 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.479 06:51:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.479 06:51:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.479 06:51:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:25.479 [2024-12-06 06:51:57.908935] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:25:25.479 [2024-12-06 06:51:57.909106] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62610 ] 00:25:25.737 [2024-12-06 06:51:58.093265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:25.737 [2024-12-06 06:51:58.221974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.737 [2024-12-06 06:51:58.222098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.737 [2024-12-06 06:51:58.222098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.305 06:51:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.305 06:51:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:25:26.305 06:51:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:26.656 I/O targets: 00:25:26.656 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:25:26.656 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:25:26.656 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:25:26.656 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:25:26.656 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:25:26.656 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:25:26.656 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:25:26.656 00:25:26.656 00:25:26.656 CUnit - A unit testing framework for C - Version 2.1-3 00:25:26.656 http://cunit.sourceforge.net/ 00:25:26.656 00:25:26.656 00:25:26.656 Suite: bdevio tests on: Nvme3n1 00:25:26.656 Test: blockdev write read block ...passed 00:25:26.656 Test: blockdev write zeroes read block ...passed 00:25:26.656 Test: blockdev write zeroes read no split ...passed 00:25:26.656 Test: blockdev write zeroes read split ...passed 00:25:26.656 Test: blockdev write zeroes read split partial ...passed 00:25:26.656 Test: blockdev reset ...[2024-12-06 06:51:59.067061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:25:26.656 [2024-12-06 06:51:59.070833] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
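Each bdevio suite includes a controller reset test; the [0000:00:13.0] reset just completed belongs to the Nvme3n1 suite. The harness driving all seven suites was started a few lines earlier and is, in outline (both commands appear verbatim above):

    # bdev_bounds: run bdevio in wait-for-tests mode, then drive it over RPC.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests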
00:25:26.656 passed 00:25:26.656 Test: blockdev write read 8 blocks ...passed 00:25:26.656 Test: blockdev write read size > 128k ...passed 00:25:26.656 Test: blockdev write read invalid size ...passed 00:25:26.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:26.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:26.656 Test: blockdev write read max offset ...passed 00:25:26.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:26.656 Test: blockdev writev readv 8 blocks ...passed 00:25:26.656 Test: blockdev writev readv 30 x 1block ...passed 00:25:26.656 Test: blockdev writev readv block ...passed 00:25:26.656 Test: blockdev writev readv size > 128k ...passed 00:25:26.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:26.656 Test: blockdev comparev and writev ...[2024-12-06 06:51:59.077443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3604000 len:0x1000 00:25:26.656 [2024-12-06 06:51:59.077502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:26.656 passed 00:25:26.656 Test: blockdev nvme passthru rw ...passed 00:25:26.656 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:51:59.078202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:26.656 [2024-12-06 06:51:59.078249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:26.656 passed 00:25:26.656 Test: blockdev nvme admin passthru ...passed 00:25:26.656 Test: blockdev copy ...passed 00:25:26.656 Suite: bdevio tests on: Nvme2n3 00:25:26.656 Test: blockdev write read block ...passed 00:25:26.656 Test: blockdev write zeroes read block ...passed 00:25:26.656 Test: blockdev write zeroes read no split ...passed 00:25:26.656 Test: blockdev write zeroes read split ...passed 00:25:26.656 Test: blockdev write zeroes read split partial ...passed 00:25:26.656 Test: blockdev reset ...[2024-12-06 06:51:59.145926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:25:26.656 [2024-12-06 06:51:59.149943] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:25:26.656 passed 00:25:26.656 Test: blockdev write read 8 blocks ...passed 00:25:26.656 Test: blockdev write read size > 128k ...passed 00:25:26.656 Test: blockdev write read invalid size ...passed 00:25:26.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:26.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:26.656 Test: blockdev write read max offset ...passed 00:25:26.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:26.656 Test: blockdev writev readv 8 blocks ...passed 00:25:26.656 Test: blockdev writev readv 30 x 1block ...passed 00:25:26.656 Test: blockdev writev readv block ...passed 00:25:26.656 Test: blockdev writev readv size > 128k ...passed 00:25:26.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:26.656 Test: blockdev comparev and writev ...[2024-12-06 06:51:59.157329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3602000 len:0x1000 00:25:26.656 [2024-12-06 06:51:59.157389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:26.656 passed 00:25:26.656 Test: blockdev nvme passthru rw ...passed 00:25:26.656 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:51:59.158197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:26.656 [2024-12-06 06:51:59.158241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:26.656 passed 00:25:26.656 Test: blockdev nvme admin passthru ...passed 00:25:26.656 Test: blockdev copy ...passed 00:25:26.656 Suite: bdevio tests on: Nvme2n2 00:25:26.656 Test: blockdev write read block ...passed 00:25:26.656 Test: blockdev write zeroes read block ...passed 00:25:26.656 Test: blockdev write zeroes read no split ...passed 00:25:26.656 Test: blockdev write zeroes read split ...passed 00:25:26.656 Test: blockdev write zeroes read split partial ...passed 00:25:26.656 Test: blockdev reset ...[2024-12-06 06:51:59.223610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:25:26.656 [2024-12-06 06:51:59.227855] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:25:26.656 passed 00:25:26.656 Test: blockdev write read 8 blocks ...passed 00:25:26.656 Test: blockdev write read size > 128k ...passed 00:25:26.656 Test: blockdev write read invalid size ...passed 00:25:26.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:26.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:26.656 Test: blockdev write read max offset ...passed 00:25:26.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:26.656 Test: blockdev writev readv 8 blocks ...passed 00:25:26.657 Test: blockdev writev readv 30 x 1block ...passed 00:25:26.657 Test: blockdev writev readv block ...passed 00:25:26.657 Test: blockdev writev readv size > 128k ...passed 00:25:26.657 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:26.657 Test: blockdev comparev and writev ...[2024-12-06 06:51:59.235375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7438000 len:0x1000 00:25:26.657 [2024-12-06 06:51:59.235433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:26.657 passed 00:25:26.657 Test: blockdev nvme passthru rw ...passed 00:25:26.657 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:51:59.236223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:26.657 [2024-12-06 06:51:59.236264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:26.657 passed 00:25:26.657 Test: blockdev nvme admin passthru ...passed 00:25:26.657 Test: blockdev copy ...passed 00:25:26.657 Suite: bdevio tests on: Nvme2n1 00:25:26.657 Test: blockdev write read block ...passed 00:25:26.915 Test: blockdev write zeroes read block ...passed 00:25:26.915 Test: blockdev write zeroes read no split ...passed 00:25:26.915 Test: blockdev write zeroes read split ...passed 00:25:26.915 Test: blockdev write zeroes read split partial ...passed 00:25:26.915 Test: blockdev reset ...[2024-12-06 06:51:59.301274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:25:26.915 [2024-12-06 06:51:59.305368] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:25:26.915 passed 00:25:26.915 Test: blockdev write read 8 blocks ...passed 00:25:26.915 Test: blockdev write read size > 128k ...passed 00:25:26.915 Test: blockdev write read invalid size ...passed 00:25:26.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:26.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:26.915 Test: blockdev write read max offset ...passed 00:25:26.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:26.915 Test: blockdev writev readv 8 blocks ...passed 00:25:26.915 Test: blockdev writev readv 30 x 1block ...passed 00:25:26.915 Test: blockdev writev readv block ...passed 00:25:26.915 Test: blockdev writev readv size > 128k ...passed 00:25:26.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:26.915 Test: blockdev comparev and writev ...[2024-12-06 06:51:59.312583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7434000 len:0x1000 00:25:26.915 [2024-12-06 06:51:59.312643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:26.915 passed 00:25:26.915 Test: blockdev nvme passthru rw ...passed 00:25:26.915 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:51:59.313463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:26.915 passed 00:25:26.915 Test: blockdev nvme admin passthru ...[2024-12-06 06:51:59.313505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:26.915 passed 00:25:26.915 Test: blockdev copy ...passed 00:25:26.915 Suite: bdevio tests on: Nvme1n1p2 00:25:26.915 Test: blockdev write read block ...passed 00:25:26.915 Test: blockdev write zeroes read block ...passed 00:25:26.915 Test: blockdev write zeroes read no split ...passed 00:25:26.915 Test: blockdev write zeroes read split ...passed 00:25:26.915 Test: blockdev write zeroes read split partial ...passed 00:25:26.915 Test: blockdev reset ...[2024-12-06 06:51:59.380295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:25:26.915 [2024-12-06 06:51:59.384015] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:25:26.915 passed 00:25:26.916 Test: blockdev write read 8 blocks ...passed 00:25:26.916 Test: blockdev write read size > 128k ...passed 00:25:26.916 Test: blockdev write read invalid size ...passed 00:25:26.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:26.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:26.916 Test: blockdev write read max offset ...passed 00:25:26.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:26.916 Test: blockdev writev readv 8 blocks ...passed 00:25:26.916 Test: blockdev writev readv 30 x 1block ...passed 00:25:26.916 Test: blockdev writev readv block ...passed 00:25:26.916 Test: blockdev writev readv size > 128k ...passed 00:25:26.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:26.916 Test: blockdev comparev and writev ...[2024-12-06 06:51:59.394324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d7430000 len:0x1000 00:25:26.916 [2024-12-06 06:51:59.394382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:26.916 passed 00:25:26.916 Test: blockdev nvme passthru rw ...passed 00:25:26.916 Test: blockdev nvme passthru vendor specific ...passed 00:25:26.916 Test: blockdev nvme admin passthru ...passed 00:25:26.916 Test: blockdev copy ...passed 00:25:26.916 Suite: bdevio tests on: Nvme1n1p1 00:25:26.916 Test: blockdev write read block ...passed 00:25:26.916 Test: blockdev write zeroes read block ...passed 00:25:26.916 Test: blockdev write zeroes read no split ...passed 00:25:26.916 Test: blockdev write zeroes read split ...passed 00:25:26.916 Test: blockdev write zeroes read split partial ...passed 00:25:26.916 Test: blockdev reset ...[2024-12-06 06:51:59.449262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:25:26.916 [2024-12-06 06:51:59.452865] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:25:26.916 passed 00:25:26.916 Test: blockdev write read 8 blocks ...passed 00:25:26.916 Test: blockdev write read size > 128k ...passed 00:25:26.916 Test: blockdev write read invalid size ...passed 00:25:26.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:26.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:26.916 Test: blockdev write read max offset ...passed 00:25:26.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:26.916 Test: blockdev writev readv 8 blocks ...passed 00:25:26.916 Test: blockdev writev readv 30 x 1block ...passed 00:25:26.916 Test: blockdev writev readv block ...passed 00:25:26.916 Test: blockdev writev readv size > 128k ...passed 00:25:26.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:26.916 Test: blockdev comparev and writev ...[2024-12-06 06:51:59.460697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c380e000 len:0x1000 00:25:26.916 [2024-12-06 06:51:59.460763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:26.916 passed 00:25:26.916 Test: blockdev nvme passthru rw ...passed 00:25:26.916 Test: blockdev nvme passthru vendor specific ...passed 00:25:26.916 Test: blockdev nvme admin passthru ...passed 00:25:26.916 Test: blockdev copy ...passed 00:25:26.916 Suite: bdevio tests on: Nvme0n1 00:25:26.916 Test: blockdev write read block ...passed 00:25:26.916 Test: blockdev write zeroes read block ...passed 00:25:26.916 Test: blockdev write zeroes read no split ...passed 00:25:26.916 Test: blockdev write zeroes read split ...passed 00:25:27.175 Test: blockdev write zeroes read split partial ...passed 00:25:27.175 Test: blockdev reset ...[2024-12-06 06:51:59.516065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:25:27.175 [2024-12-06 06:51:59.519663] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:25:27.175 passed 00:25:27.175 Test: blockdev write read 8 blocks ...passed 00:25:27.175 Test: blockdev write read size > 128k ...passed 00:25:27.175 Test: blockdev write read invalid size ...passed 00:25:27.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:27.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:27.175 Test: blockdev write read max offset ...passed 00:25:27.175 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:27.175 Test: blockdev writev readv 8 blocks ...passed 00:25:27.175 Test: blockdev writev readv 30 x 1block ...passed 00:25:27.175 Test: blockdev writev readv block ...passed 00:25:27.175 Test: blockdev writev readv size > 128k ...passed 00:25:27.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:27.175 Test: blockdev comparev and writev ...passed 00:25:27.175 Test: blockdev nvme passthru rw ...[2024-12-06 06:51:59.526184] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:25:27.175 separate metadata which is not supported yet. 
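The comparev_and_writev case is skipped on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which the compare path does not support yet. Whether a bdev carries metadata can be checked before deciding to run such tests; a hedged sketch (the md_size field name is assumed from typical bdev_get_bdevs output, so verify it against the SPDK version in use):

    # read the per-block metadata size of one bdev (0 or absent means none)
    md_size=$(scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq -r '.[0].md_size // 0')
    if ((md_size > 0)); then
        echo "Nvme0n1 carries ${md_size}B metadata per block; skipping comparev_and_writev"
    fi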
00:25:27.175 passed 00:25:27.175 Test: blockdev nvme passthru vendor specific ...passed 00:25:27.175 Test: blockdev nvme admin passthru ...[2024-12-06 06:51:59.526734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:25:27.175 [2024-12-06 06:51:59.526787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:25:27.175 passed 00:25:27.175 Test: blockdev copy ...passed 00:25:27.175 00:25:27.175 Run Summary: Type Total Ran Passed Failed Inactive 00:25:27.175 suites 7 7 n/a 0 0 00:25:27.175 tests 161 161 161 0 0 00:25:27.175 asserts 1025 1025 1025 0 n/a 00:25:27.175 00:25:27.175 Elapsed time = 1.415 seconds 00:25:27.175 0 00:25:27.175 06:51:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62610 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62610 ']' 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62610 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62610 00:25:27.176 killing process with pid 62610 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62610' 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62610 00:25:27.176 06:51:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62610 00:25:28.113 ************************************ 00:25:28.113 END TEST bdev_bounds 00:25:28.113 ************************************ 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:25:28.113 00:25:28.113 real 0m2.761s 00:25:28.113 user 0m7.147s 00:25:28.113 sys 0m0.344s 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:28.113 06:52:00 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:25:28.113 06:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:28.113 06:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.113 06:52:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:28.113 ************************************ 00:25:28.113 START TEST bdev_nbd 00:25:28.113 ************************************ 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:25:28.113 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62675 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62675 /var/tmp/spdk-nbd.sock 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62675 ']' 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.114 06:52:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:28.373 [2024-12-06 06:52:00.713277] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
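The xtrace around this point is nbd_function_test: bdev_svc is started on a private RPC socket (/var/tmp/spdk-nbd.sock) with the test's bdev.json, and each of the seven bdevs is then exported as an NBD device. Condensed, the flow looks like the sketch below; SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk, and waiting for the socket to come up is elided:

    rpc_sock=/var/tmp/spdk-nbd.sock
    "$SPDK_DIR"/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 \
        --json "$SPDK_DIR"/test/bdev/bdev.json &
    # this first pass omits the device node, letting the RPC allocate /dev/nbdX;
    # a second pass later in the run passes an explicit node per bdev
    for bdev in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        "$SPDK_DIR"/scripts/rpc.py -s "$rpc_sock" nbd_start_disk "$bdev"
    done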
00:25:28.373 [2024-12-06 06:52:00.713435] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.373 [2024-12-06 06:52:00.881667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.631 [2024-12-06 06:52:00.985192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:25:29.199 06:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:29.457 1+0 records in 00:25:29.457 1+0 records out 00:25:29.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501639 s, 8.2 MB/s 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:29.457 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.458 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:29.458 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:29.458 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:29.458 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:25:29.458 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:25:30.024 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:30.025 1+0 records in 00:25:30.025 1+0 records out 00:25:30.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554634 s, 7.4 MB/s 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:25:30.025 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:25:30.281 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:25:30.281 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:30.282 1+0 records in 00:25:30.282 1+0 records out 00:25:30.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678825 s, 6.0 MB/s 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:25:30.282 06:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:30.540 1+0 records in 00:25:30.540 1+0 records out 00:25:30.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548792 s, 7.5 MB/s 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:25:30.540 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:30.798 1+0 records in 00:25:30.798 1+0 records out 00:25:30.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493558 s, 8.3 MB/s 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.798 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:30.799 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.799 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:30.799 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:30.799 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:30.799 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:25:30.799 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:31.366 1+0 records in 00:25:31.366 1+0 records out 00:25:31.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704478 s, 5.8 MB/s 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:31.366 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:25:31.367 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:31.625 1+0 records in 00:25:31.625 1+0 records out 00:25:31.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873211 s, 4.7 MB/s 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:25:31.625 06:52:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:31.883 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:31.883 { 00:25:31.883 "nbd_device": "/dev/nbd0", 00:25:31.883 "bdev_name": "Nvme0n1" 00:25:31.883 }, 00:25:31.883 { 00:25:31.883 "nbd_device": "/dev/nbd1", 00:25:31.883 "bdev_name": "Nvme1n1p1" 00:25:31.883 }, 00:25:31.883 { 00:25:31.883 "nbd_device": "/dev/nbd2", 00:25:31.883 "bdev_name": "Nvme1n1p2" 00:25:31.883 }, 00:25:31.883 { 00:25:31.883 "nbd_device": "/dev/nbd3", 00:25:31.883 "bdev_name": "Nvme2n1" 00:25:31.883 }, 00:25:31.883 { 00:25:31.883 "nbd_device": "/dev/nbd4", 00:25:31.883 "bdev_name": "Nvme2n2" 00:25:31.883 }, 00:25:31.883 { 00:25:31.883 "nbd_device": "/dev/nbd5", 00:25:31.883 "bdev_name": "Nvme2n3" 00:25:31.883 }, 00:25:31.883 { 00:25:31.883 "nbd_device": "/dev/nbd6", 00:25:31.883 "bdev_name": "Nvme3n1" 00:25:31.883 } 00:25:31.883 ]' 00:25:31.883 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:31.883 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:31.884 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:31.884 { 00:25:31.884 "nbd_device": "/dev/nbd0", 00:25:31.884 "bdev_name": "Nvme0n1" 00:25:31.884 }, 00:25:31.884 { 00:25:31.884 "nbd_device": "/dev/nbd1", 00:25:31.884 "bdev_name": "Nvme1n1p1" 00:25:31.884 }, 00:25:31.884 { 00:25:31.884 "nbd_device": "/dev/nbd2", 00:25:31.884 "bdev_name": "Nvme1n1p2" 00:25:31.884 }, 00:25:31.884 { 00:25:31.884 "nbd_device": "/dev/nbd3", 00:25:31.884 "bdev_name": "Nvme2n1" 00:25:31.884 }, 00:25:31.884 { 00:25:31.884 "nbd_device": "/dev/nbd4", 00:25:31.884 "bdev_name": "Nvme2n2" 00:25:31.884 }, 00:25:31.884 { 00:25:31.884 "nbd_device": "/dev/nbd5", 00:25:31.884 "bdev_name": "Nvme2n3" 00:25:31.884 }, 00:25:31.884 { 00:25:31.884 "nbd_device": "/dev/nbd6", 00:25:31.884 "bdev_name": "Nvme3n1" 00:25:31.884 } 00:25:31.884 ]' 00:25:31.884 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:25:31.884 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:31.884 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:25:31.884 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:31.884 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:31.884 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:31.884 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:32.142 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:32.400 06:52:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:32.973 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:32.973 06:52:05 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:33.230 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:33.492 06:52:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:33.761 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:34.019 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:34.277 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:34.277 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:34.277 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:34.277 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:34.277 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:34.277 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:34.278 
06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:25:34.278 06:52:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:25:34.536 /dev/nbd0 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:34.536 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:34.536 1+0 records in 00:25:34.536 1+0 records out 00:25:34.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541724 s, 7.6 MB/s 00:25:34.797 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:34.797 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:34.797 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:34.797 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:34.797 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:34.797 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:34.797 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:25:34.797 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:25:35.064 /dev/nbd1 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:35.064 06:52:07 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:35.064 1+0 records in 00:25:35.064 1+0 records out 00:25:35.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586387 s, 7.0 MB/s 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:25:35.064 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:25:35.323 /dev/nbd10 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:35.323 1+0 records in 00:25:35.323 1+0 records out 00:25:35.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569363 s, 7.2 MB/s 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:25:35.323 06:52:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:25:35.581 /dev/nbd11 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:35.581 1+0 records in 00:25:35.581 1+0 records out 00:25:35.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694357 s, 5.9 MB/s 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:25:35.581 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:25:35.839 /dev/nbd12 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
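Every nbd_start_disk in this trace is followed by the same readiness check: poll /proc/partitions until the kernel lists the new device, then prove that a 4 KiB O_DIRECT read actually completes. The helper's shape, inferred from the xtrace rather than quoted from the library source:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # ready once the kernel has registered the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # a direct-I/O read that must hand back a non-empty 4096-byte block
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size -ne 0 ]]
    }

Teardown mirrors it: nbd_stop_disk per device, a loop that returns once the name drops out of /proc/partitions, and finally nbd_get_disks piped through jq -r '.[] | .nbd_device' to confirm the exported-device count is back to zero.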
00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:35.839 1+0 records in 00:25:35.839 1+0 records out 00:25:35.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623785 s, 6.6 MB/s 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:25:35.839 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:25:36.406 /dev/nbd13 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:36.406 1+0 records in 00:25:36.406 1+0 records out 00:25:36.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818017 s, 5.0 MB/s 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:25:36.406 06:52:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:25:36.665 /dev/nbd14 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:36.665 1+0 records in 00:25:36.665 1+0 records out 00:25:36.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873138 s, 4.7 MB/s 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:36.665 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:36.924 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd0", 00:25:36.924 "bdev_name": "Nvme0n1" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd1", 00:25:36.924 "bdev_name": "Nvme1n1p1" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd10", 00:25:36.924 "bdev_name": "Nvme1n1p2" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd11", 00:25:36.924 "bdev_name": "Nvme2n1" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd12", 00:25:36.924 "bdev_name": "Nvme2n2" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd13", 00:25:36.924 "bdev_name": "Nvme2n3" 
00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd14", 00:25:36.924 "bdev_name": "Nvme3n1" 00:25:36.924 } 00:25:36.924 ]' 00:25:36.924 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd0", 00:25:36.924 "bdev_name": "Nvme0n1" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd1", 00:25:36.924 "bdev_name": "Nvme1n1p1" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd10", 00:25:36.924 "bdev_name": "Nvme1n1p2" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd11", 00:25:36.924 "bdev_name": "Nvme2n1" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd12", 00:25:36.924 "bdev_name": "Nvme2n2" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd13", 00:25:36.924 "bdev_name": "Nvme2n3" 00:25:36.924 }, 00:25:36.924 { 00:25:36.924 "nbd_device": "/dev/nbd14", 00:25:36.924 "bdev_name": "Nvme3n1" 00:25:36.924 } 00:25:36.924 ]' 00:25:36.924 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:37.184 /dev/nbd1 00:25:37.184 /dev/nbd10 00:25:37.184 /dev/nbd11 00:25:37.184 /dev/nbd12 00:25:37.184 /dev/nbd13 00:25:37.184 /dev/nbd14' 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:37.184 /dev/nbd1 00:25:37.184 /dev/nbd10 00:25:37.184 /dev/nbd11 00:25:37.184 /dev/nbd12 00:25:37.184 /dev/nbd13 00:25:37.184 /dev/nbd14' 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:37.184 256+0 records in 00:25:37.184 256+0 records out 00:25:37.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763426 s, 137 MB/s 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:37.184 256+0 records in 00:25:37.184 256+0 records out 00:25:37.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.150659 s, 7.0 MB/s 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:37.184 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:37.442 256+0 records in 00:25:37.442 256+0 records out 00:25:37.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153739 s, 6.8 MB/s 00:25:37.442 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:37.442 06:52:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:25:37.442 256+0 records in 00:25:37.442 256+0 records out 00:25:37.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170196 s, 6.2 MB/s 00:25:37.442 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:37.442 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:25:37.700 256+0 records in 00:25:37.700 256+0 records out 00:25:37.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172017 s, 6.1 MB/s 00:25:37.700 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:37.700 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:25:37.958 256+0 records in 00:25:37.958 256+0 records out 00:25:37.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158071 s, 6.6 MB/s 00:25:37.958 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:37.958 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:25:37.958 256+0 records in 00:25:37.958 256+0 records out 00:25:37.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129214 s, 8.1 MB/s 00:25:37.958 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:37.958 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:25:38.217 256+0 records in 00:25:38.217 256+0 records out 00:25:38.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14456 s, 7.3 MB/s 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:38.217 06:52:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:38.476 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:38.735 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:25:39.301 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:39.302 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:25:39.560 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:25:39.560 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:25:39.560 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:25:39.560 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:39.560 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:39.560 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:25:39.560 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:39.561 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:39.561 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:39.561 06:52:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:25:39.819 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:39.820 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:40.078 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:40.336 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:40.337 06:52:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:25:40.600 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:40.859 malloc_lvol_verify 00:25:41.118 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:41.118 f9db6fce-d568-43b6-8add-9e6b5d83ddb1 00:25:41.377 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:41.377 c736deb5-d03b-4c80-8606-15582359fe72 00:25:41.635 06:52:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:41.894 /dev/nbd0 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:25:41.894 mke2fs 1.47.0 (5-Feb-2023) 00:25:41.894 Discarding device blocks: 0/4096 done 00:25:41.894 Creating filesystem with 4096 1k blocks and 1024 inodes 00:25:41.894 00:25:41.894 Allocating group tables: 0/1 done 00:25:41.894 Writing inode tables: 0/1 done 00:25:41.894 Creating journal (1024 blocks): done 00:25:41.894 Writing superblocks and filesystem accounting information: 0/1 done 00:25:41.894 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:25:41.894 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62675 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62675 ']' 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62675 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62675 00:25:42.153 killing process with pid 62675 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62675' 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62675 00:25:42.153 06:52:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62675 00:25:43.544 ************************************ 00:25:43.544 END TEST bdev_nbd 00:25:43.544 ************************************ 00:25:43.544 06:52:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:25:43.544 00:25:43.544 real 0m15.090s 00:25:43.544 user 0m21.938s 00:25:43.544 sys 0m4.704s 00:25:43.544 06:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.544 06:52:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:43.544 06:52:15 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:25:43.544 06:52:15 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:25:43.544 06:52:15 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:25:43.544 skipping fio tests on NVMe due to multi-ns failures. 00:25:43.544 06:52:15 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:25:43.544 06:52:15 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:43.544 06:52:15 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:43.544 06:52:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:25:43.544 06:52:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.544 06:52:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:43.544 ************************************ 00:25:43.544 START TEST bdev_verify 00:25:43.544 ************************************ 00:25:43.544 06:52:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:43.544 [2024-12-06 06:52:15.861484] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:25:43.544 [2024-12-06 06:52:15.861654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63131 ] 00:25:43.545 [2024-12-06 06:52:16.046011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:43.802 [2024-12-06 06:52:16.171596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.802 [2024-12-06 06:52:16.171603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.367 Running I/O for 5 seconds... 
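The bdev_verify stage drives the same bdevs through SPDK's bdevperf example instead of the kernel NBD path. The flags in the run_test line above decode as: -q 128 queue depth per job, -o 4096 four-KiB I/Os, -w verify write-then-read-back-and-compare, -t 5 a five-second run, -m 0x3 cores 0 and 1, and -C letting every core in the mask submit I/O to every bdev — consistent with each bdev appearing twice (Core Mask 0x1 and 0x2) in the results below. A hand-run reproduction, with paths relative to the SPDK repo root as in the trace:

    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
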
00:25:46.678 17856.00 IOPS, 69.75 MiB/s [2024-12-06T06:52:20.205Z] 18336.00 IOPS, 71.62 MiB/s [2024-12-06T06:52:21.141Z] 18581.33 IOPS, 72.58 MiB/s [2024-12-06T06:52:22.077Z] 18480.00 IOPS, 72.19 MiB/s [2024-12-06T06:52:22.077Z] 19161.60 IOPS, 74.85 MiB/s 00:25:49.486 Latency(us) 00:25:49.486 [2024-12-06T06:52:22.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.486 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:49.486 Verification LBA range: start 0x0 length 0xbd0bd 00:25:49.486 Nvme0n1 : 5.06 1365.60 5.33 0.00 0.00 93390.50 21328.99 89605.59 00:25:49.486 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:49.486 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:25:49.486 Nvme0n1 : 5.08 1334.90 5.21 0.00 0.00 95676.60 16920.20 95325.09 00:25:49.486 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:49.486 Verification LBA range: start 0x0 length 0x4ff80 00:25:49.486 Nvme1n1p1 : 5.07 1363.42 5.33 0.00 0.00 93336.20 21448.15 88652.33 00:25:49.486 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:49.486 Verification LBA range: start 0x4ff80 length 0x4ff80 00:25:49.486 Nvme1n1p1 : 5.08 1334.44 5.21 0.00 0.00 95576.16 16920.20 90558.84 00:25:49.486 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:49.486 Verification LBA range: start 0x0 length 0x4ff7f 00:25:49.486 Nvme1n1p2 : 5.07 1362.62 5.32 0.00 0.00 93152.65 19779.96 86269.21 00:25:49.486 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:49.486 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:25:49.486 Nvme1n1p2 : 5.09 1334.00 5.21 0.00 0.00 95282.81 17158.52 84839.33 00:25:49.486 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:49.486 Verification LBA range: start 0x0 length 0x80000 00:25:49.486 Nvme2n1 : 5.07 1362.03 5.32 0.00 0.00 92982.76 19899.11 87222.46 00:25:49.486 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:49.486 Verification LBA range: start 0x80000 length 0x80000 00:25:49.486 Nvme2n1 : 5.09 1333.60 5.21 0.00 0.00 95063.84 17277.67 87222.46 00:25:49.486 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:49.487 Verification LBA range: start 0x0 length 0x80000 00:25:49.487 Nvme2n2 : 5.09 1369.59 5.35 0.00 0.00 92414.00 4587.52 88652.33 00:25:49.487 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:49.487 Verification LBA range: start 0x80000 length 0x80000 00:25:49.487 Nvme2n2 : 5.09 1333.17 5.21 0.00 0.00 94907.01 17396.83 90082.21 00:25:49.487 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:49.487 Verification LBA range: start 0x0 length 0x80000 00:25:49.487 Nvme2n3 : 5.11 1378.05 5.38 0.00 0.00 91770.34 11856.06 90558.84 00:25:49.487 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:49.487 Verification LBA range: start 0x80000 length 0x80000 00:25:49.487 Nvme2n3 : 5.09 1332.74 5.21 0.00 0.00 94726.13 17754.30 91512.09 00:25:49.487 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:49.487 Verification LBA range: start 0x0 length 0x20000 00:25:49.487 Nvme3n1 : 5.11 1377.57 5.38 0.00 0.00 91587.66 11617.75 91512.09 00:25:49.487 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:49.487 Verification LBA range: start 0x20000 length 0x20000 00:25:49.487 
Nvme3n1 : 5.09 1332.34 5.20 0.00 0.00 94542.24 14596.65 94848.47 00:25:49.487 [2024-12-06T06:52:22.078Z] =================================================================================================================== 00:25:49.487 [2024-12-06T06:52:22.078Z] Total : 18914.09 73.88 0.00 0.00 93868.09 4587.52 95325.09 00:25:50.864 00:25:50.864 real 0m7.614s 00:25:50.864 user 0m14.078s 00:25:50.864 sys 0m0.253s 00:25:50.864 06:52:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.864 06:52:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:25:50.864 ************************************ 00:25:50.864 END TEST bdev_verify 00:25:50.864 ************************************ 00:25:50.864 06:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:50.864 06:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:25:50.864 06:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.864 06:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:50.864 ************************************ 00:25:50.864 START TEST bdev_verify_big_io 00:25:50.864 ************************************ 00:25:50.864 06:52:23 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:51.122 [2024-12-06 06:52:23.508627] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:25:51.122 [2024-12-06 06:52:23.509384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63234 ] 00:25:51.122 [2024-12-06 06:52:23.682356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:51.380 [2024-12-06 06:52:23.802210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.380 [2024-12-06 06:52:23.802220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.314 Running I/O for 5 seconds... 
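The big-I/O variant repeats the verify workload with -o 65536 (64 KiB transfers), so the table that follows is dominated by bandwidth rather than IOPS. To summarize a saved log like this one, a small awk filter over the per-bdev result rows works; this sketch assumes the Jenkins timestamp prefix has been stripped (with it present, shift each field index up by one):

    # Result rows look like: "Nvme0n1 : 5.85 94.77 5.92 0.00 0.00 ..."
    # ($1 = bdev, $3 = runtime, $4 = IOPS, $5 = MiB/s).
    awk '/^[[:space:]]*Nvme[0-9]+n[0-9]+(p[0-9]+)? :/ {
             iops[$1] += $4; n[$1]++
         }
         END {
             for (b in iops)
                 printf "%-10s %8.2f IOPS (sum over %d core rows)\n", b, iops[b], n[b]
         }' bdevperf.log
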
00:25:58.128 528.00 IOPS, 33.00 MiB/s [2024-12-06T06:52:31.056Z] 2317.50 IOPS, 144.84 MiB/s [2024-12-06T06:52:31.056Z] 2950.67 IOPS, 184.42 MiB/s 00:25:58.465 Latency(us) 00:25:58.465 [2024-12-06T06:52:31.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.466 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x0 length 0xbd0b 00:25:58.466 Nvme0n1 : 5.85 94.77 5.92 0.00 0.00 1270160.38 24307.90 1563331.49 00:25:58.466 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0xbd0b length 0xbd0b 00:25:58.466 Nvme0n1 : 5.65 107.52 6.72 0.00 0.00 1141209.82 14239.19 1288795.23 00:25:58.466 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x0 length 0x4ff8 00:25:58.466 Nvme1n1p1 : 5.85 92.99 5.81 0.00 0.00 1266599.29 90558.84 1601461.53 00:25:58.466 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x4ff8 length 0x4ff8 00:25:58.466 Nvme1n1p1 : 5.81 100.67 6.29 0.00 0.00 1172695.79 98184.84 1769233.69 00:25:58.466 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x0 length 0x4ff7 00:25:58.466 Nvme1n1p2 : 6.15 70.30 4.39 0.00 0.00 1620779.20 131548.63 2272550.17 00:25:58.466 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x4ff7 length 0x4ff7 00:25:58.466 Nvme1n1p2 : 5.90 105.17 6.57 0.00 0.00 1092909.07 89605.59 1799737.72 00:25:58.466 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x0 length 0x8000 00:25:58.466 Nvme2n1 : 5.93 112.31 7.02 0.00 0.00 1002104.30 76260.07 1060015.01 00:25:58.466 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x8000 length 0x8000 00:25:58.466 Nvme2n1 : 5.98 109.37 6.84 0.00 0.00 1021562.49 71017.19 1830241.75 00:25:58.466 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x0 length 0x8000 00:25:58.466 Nvme2n2 : 5.99 117.59 7.35 0.00 0.00 930965.79 53143.74 1090519.04 00:25:58.466 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x8000 length 0x8000 00:25:58.466 Nvme2n2 : 6.12 113.76 7.11 0.00 0.00 949200.96 103427.72 1868371.78 00:25:58.466 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x0 length 0x8000 00:25:58.466 Nvme2n3 : 6.10 125.90 7.87 0.00 0.00 842568.92 62914.56 1128649.08 00:25:58.466 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x8000 length 0x8000 00:25:58.466 Nvme2n3 : 6.18 122.04 7.63 0.00 0.00 860472.03 36461.85 1891249.80 00:25:58.466 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x0 length 0x2000 00:25:58.466 Nvme3n1 : 6.17 140.61 8.79 0.00 0.00 734763.10 6762.12 1166779.11 00:25:58.466 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:58.466 Verification LBA range: start 0x2000 length 0x2000 00:25:58.466 Nvme3n1 : 6.19 136.70 8.54 0.00 0.00 748294.11 1266.04 1914127.83 00:25:58.466 
[2024-12-06T06:52:31.057Z] =================================================================================================================== 00:25:58.466 [2024-12-06T06:52:31.057Z] Total : 1549.70 96.86 0.00 0.00 1007511.92 1266.04 2272550.17 00:26:00.366 00:26:00.366 real 0m9.026s 00:26:00.366 user 0m16.908s 00:26:00.366 sys 0m0.271s 00:26:00.366 06:52:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.366 06:52:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.366 ************************************ 00:26:00.366 END TEST bdev_verify_big_io 00:26:00.366 ************************************ 00:26:00.366 06:52:32 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:00.366 06:52:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:26:00.366 06:52:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.366 06:52:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:00.366 ************************************ 00:26:00.366 START TEST bdev_write_zeroes 00:26:00.366 ************************************ 00:26:00.366 06:52:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:00.366 [2024-12-06 06:52:32.593481] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:26:00.366 [2024-12-06 06:52:32.593641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63350 ] 00:26:00.366 [2024-12-06 06:52:32.765913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.366 [2024-12-06 06:52:32.869936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.302 Running I/O for 1 seconds... 
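The write_zeroes pass issues zero-fill requests instead of data writes; for NVMe bdevs SPDK can map these to the controller's Write Zeroes command where supported, falling back to writing zeroed buffers otherwise. Per the EAL line above (-c 0x1) this run is single-core and, per -t 1, only one second long:

    # One-second zero-fill pass over every bdev in the config.
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
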
00:26:02.236 48384.00 IOPS, 189.00 MiB/s 00:26:02.236 Latency(us) 00:26:02.236 [2024-12-06T06:52:34.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.236 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:02.236 Nvme0n1 : 1.03 6917.88 27.02 0.00 0.00 18459.16 14179.61 31695.59 00:26:02.236 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:02.236 Nvme1n1p1 : 1.03 6908.91 26.99 0.00 0.00 18451.06 14120.03 30742.34 00:26:02.236 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:02.236 Nvme1n1p2 : 1.03 6899.81 26.95 0.00 0.00 18408.74 14298.76 29789.09 00:26:02.236 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:02.236 Nvme2n1 : 1.03 6891.53 26.92 0.00 0.00 18343.75 12392.26 28716.68 00:26:02.236 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:02.236 Nvme2n2 : 1.03 6883.29 26.89 0.00 0.00 18313.52 10485.76 28120.90 00:26:02.236 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:02.236 Nvme2n3 : 1.03 6875.08 26.86 0.00 0.00 18289.95 10247.45 30027.40 00:26:02.236 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:02.236 Nvme3n1 : 1.03 6866.78 26.82 0.00 0.00 18279.14 10009.13 31933.91 00:26:02.236 [2024-12-06T06:52:34.827Z] =================================================================================================================== 00:26:02.236 [2024-12-06T06:52:34.827Z] Total : 48243.28 188.45 0.00 0.00 18363.62 10009.13 31933.91 00:26:03.169 00:26:03.169 real 0m3.102s 00:26:03.169 user 0m2.790s 00:26:03.169 sys 0m0.189s 00:26:03.169 06:52:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:03.169 06:52:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:03.169 ************************************ 00:26:03.169 END TEST bdev_write_zeroes 00:26:03.169 ************************************ 00:26:03.169 06:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:03.170 06:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:26:03.170 06:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:03.170 06:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:03.170 ************************************ 00:26:03.170 START TEST bdev_json_nonenclosed 00:26:03.170 ************************************ 00:26:03.170 06:52:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:03.428 [2024-12-06 06:52:35.761182] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:26:03.428 [2024-12-06 06:52:35.761356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63403 ] 00:26:03.428 [2024-12-06 06:52:35.945174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.686 [2024-12-06 06:52:36.069462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.686 [2024-12-06 06:52:36.069585] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:26:03.686 [2024-12-06 06:52:36.069617] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:03.686 [2024-12-06 06:52:36.069635] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:03.945 00:26:03.945 real 0m0.692s 00:26:03.945 user 0m0.467s 00:26:03.945 sys 0m0.118s 00:26:03.945 06:52:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:03.945 06:52:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:03.945 ************************************ 00:26:03.945 END TEST bdev_json_nonenclosed 00:26:03.945 ************************************ 00:26:03.945 06:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:03.945 06:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:26:03.945 06:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:03.945 06:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:03.945 ************************************ 00:26:03.945 START TEST bdev_json_nonarray 00:26:03.945 ************************************ 00:26:03.945 06:52:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:03.945 [2024-12-06 06:52:36.493144] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:26:03.945 [2024-12-06 06:52:36.493312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63434 ] 00:26:04.203 [2024-12-06 06:52:36.675315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.203 [2024-12-06 06:52:36.779026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.203 [2024-12-06 06:52:36.779143] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
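Both JSON negative tests feed a deliberately malformed --json config to bdevperf and expect startup to fail with exactly the errors logged here; the harness treats the non-zero exit as a pass. The fixture contents are not reproduced in the log, but plausible minimal shapes (illustrative, not the actual test files) would be a nonenclosed.json whose top-level value is an array rather than an object:

    [ { "subsystems": [] } ]

which triggers "not enclosed in {}.", and a nonarray.json whose "subsystems" key holds an object rather than an array:

    { "subsystems": { "bdev": {} } }

which triggers "'subsystems' should be an array." — in both cases json_config_prepare_ctx aborts before any bdevs are created.
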
00:26:04.203 [2024-12-06 06:52:36.779173] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:04.203 [2024-12-06 06:52:36.779188] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:04.461 00:26:04.461 real 0m0.633s 00:26:04.461 user 0m0.401s 00:26:04.461 sys 0m0.126s 00:26:04.461 06:52:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.461 ************************************ 00:26:04.461 06:52:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:26:04.461 END TEST bdev_json_nonarray 00:26:04.461 ************************************ 00:26:04.720 06:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:26:04.720 06:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:26:04.720 06:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:26:04.720 06:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:04.720 06:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.720 06:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:04.720 ************************************ 00:26:04.720 START TEST bdev_gpt_uuid 00:26:04.720 ************************************ 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63460 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63460 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63460 ']' 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:04.720 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.721 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.721 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.721 06:52:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:26:04.721 [2024-12-06 06:52:37.192943] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:26:04.721 [2024-12-06 06:52:37.193094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63460 ] 00:26:04.979 [2024-12-06 06:52:37.366817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.979 [2024-12-06 06:52:37.478593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.915 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.915 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:26:05.915 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:05.915 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.915 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:26:06.173 Some configs were skipped because the RPC state that can call them passed over. 00:26:06.173 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.173 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:26:06.173 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.173 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:26:06.173 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.173 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:26:06.174 { 00:26:06.174 "name": "Nvme1n1p1", 00:26:06.174 "aliases": [ 00:26:06.174 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:26:06.174 ], 00:26:06.174 "product_name": "GPT Disk", 00:26:06.174 "block_size": 4096, 00:26:06.174 "num_blocks": 655104, 00:26:06.174 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:26:06.174 "assigned_rate_limits": { 00:26:06.174 "rw_ios_per_sec": 0, 00:26:06.174 "rw_mbytes_per_sec": 0, 00:26:06.174 "r_mbytes_per_sec": 0, 00:26:06.174 "w_mbytes_per_sec": 0 00:26:06.174 }, 00:26:06.174 "claimed": false, 00:26:06.174 "zoned": false, 00:26:06.174 "supported_io_types": { 00:26:06.174 "read": true, 00:26:06.174 "write": true, 00:26:06.174 "unmap": true, 00:26:06.174 "flush": true, 00:26:06.174 "reset": true, 00:26:06.174 "nvme_admin": false, 00:26:06.174 "nvme_io": false, 00:26:06.174 "nvme_io_md": false, 00:26:06.174 "write_zeroes": true, 00:26:06.174 "zcopy": false, 00:26:06.174 "get_zone_info": false, 00:26:06.174 "zone_management": false, 00:26:06.174 "zone_append": false, 00:26:06.174 "compare": true, 00:26:06.174 "compare_and_write": false, 00:26:06.174 "abort": true, 00:26:06.174 "seek_hole": false, 00:26:06.174 "seek_data": false, 00:26:06.174 "copy": true, 00:26:06.174 "nvme_iov_md": false 00:26:06.174 }, 00:26:06.174 "driver_specific": { 
00:26:06.174 "gpt": { 00:26:06.174 "base_bdev": "Nvme1n1", 00:26:06.174 "offset_blocks": 256, 00:26:06.174 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:26:06.174 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:26:06.174 "partition_name": "SPDK_TEST_first" 00:26:06.174 } 00:26:06.174 } 00:26:06.174 } 00:26:06.174 ]' 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:26:06.174 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:26:06.433 { 00:26:06.433 "name": "Nvme1n1p2", 00:26:06.433 "aliases": [ 00:26:06.433 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:26:06.433 ], 00:26:06.433 "product_name": "GPT Disk", 00:26:06.433 "block_size": 4096, 00:26:06.433 "num_blocks": 655103, 00:26:06.433 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:26:06.433 "assigned_rate_limits": { 00:26:06.433 "rw_ios_per_sec": 0, 00:26:06.433 "rw_mbytes_per_sec": 0, 00:26:06.433 "r_mbytes_per_sec": 0, 00:26:06.433 "w_mbytes_per_sec": 0 00:26:06.433 }, 00:26:06.433 "claimed": false, 00:26:06.433 "zoned": false, 00:26:06.433 "supported_io_types": { 00:26:06.433 "read": true, 00:26:06.433 "write": true, 00:26:06.433 "unmap": true, 00:26:06.433 "flush": true, 00:26:06.433 "reset": true, 00:26:06.433 "nvme_admin": false, 00:26:06.433 "nvme_io": false, 00:26:06.433 "nvme_io_md": false, 00:26:06.433 "write_zeroes": true, 00:26:06.433 "zcopy": false, 00:26:06.433 "get_zone_info": false, 00:26:06.433 "zone_management": false, 00:26:06.433 "zone_append": false, 00:26:06.433 "compare": true, 00:26:06.433 "compare_and_write": false, 00:26:06.433 "abort": true, 00:26:06.433 "seek_hole": false, 00:26:06.433 "seek_data": false, 00:26:06.433 "copy": true, 00:26:06.433 "nvme_iov_md": false 00:26:06.433 }, 00:26:06.433 "driver_specific": { 00:26:06.433 "gpt": { 00:26:06.433 "base_bdev": "Nvme1n1", 00:26:06.433 "offset_blocks": 655360, 00:26:06.433 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:26:06.433 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:26:06.433 "partition_name": "SPDK_TEST_second" 00:26:06.433 } 00:26:06.433 } 00:26:06.433 } 00:26:06.433 ]' 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63460 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63460 ']' 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63460 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.433 06:52:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63460 00:26:06.433 06:52:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:06.433 06:52:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:06.433 killing process with pid 63460 00:26:06.433 06:52:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63460' 00:26:06.433 06:52:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63460 00:26:06.433 06:52:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63460 00:26:09.005 00:26:09.005 real 0m4.069s 00:26:09.005 user 0m4.396s 00:26:09.005 sys 0m0.475s 00:26:09.005 06:52:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.005 06:52:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:26:09.005 ************************************ 00:26:09.005 END TEST bdev_gpt_uuid 00:26:09.005 ************************************ 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:26:09.005 06:52:41 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:09.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:09.264 Waiting for block devices as requested 00:26:09.264 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:09.264 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:26:09.523 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:09.523 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:26:14.783 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:26:14.783 06:52:47 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:26:14.783 06:52:47 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:26:15.042 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:26:15.042 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:26:15.042 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:26:15.042 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:26:15.042 06:52:47 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:26:15.042 00:26:15.042 real 1m5.031s 00:26:15.042 user 1m24.699s 00:26:15.042 sys 0m9.767s 00:26:15.042 06:52:47 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.042 ************************************ 00:26:15.042 END TEST blockdev_nvme_gpt 00:26:15.042 ************************************ 00:26:15.042 06:52:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:15.042 06:52:47 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:26:15.042 06:52:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:15.042 06:52:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.042 06:52:47 -- common/autotest_common.sh@10 -- # set +x 00:26:15.042 ************************************ 00:26:15.042 START TEST nvme 00:26:15.042 ************************************ 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:26:15.042 * Looking for test storage... 00:26:15.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:15.042 06:52:47 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.042 06:52:47 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.042 06:52:47 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.042 06:52:47 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.042 06:52:47 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.042 06:52:47 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.042 06:52:47 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.042 06:52:47 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.042 06:52:47 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.042 06:52:47 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.042 06:52:47 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.042 06:52:47 nvme -- scripts/common.sh@344 -- # case "$op" in 00:26:15.042 06:52:47 nvme -- scripts/common.sh@345 -- # : 1 00:26:15.042 06:52:47 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.042 06:52:47 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.042 06:52:47 nvme -- scripts/common.sh@365 -- # decimal 1 00:26:15.042 06:52:47 nvme -- scripts/common.sh@353 -- # local d=1 00:26:15.042 06:52:47 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.042 06:52:47 nvme -- scripts/common.sh@355 -- # echo 1 00:26:15.042 06:52:47 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.042 06:52:47 nvme -- scripts/common.sh@366 -- # decimal 2 00:26:15.042 06:52:47 nvme -- scripts/common.sh@353 -- # local d=2 00:26:15.042 06:52:47 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.042 06:52:47 nvme -- scripts/common.sh@355 -- # echo 2 00:26:15.042 06:52:47 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.042 06:52:47 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.042 06:52:47 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.042 06:52:47 nvme -- scripts/common.sh@368 -- # return 0
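
The trace above is scripts/common.sh comparing the installed lcov version against 2 field by field: both version strings are split on '.', '-' and ':' (the IFS=.-: reads), each field is normalized to a number via decimal, and fields are compared left to right until one side wins. A condensed sketch of that logic, reconstructed from the trace rather than copied from the SPDK source (the helper name and the zero-padding of missing fields are illustrative; the real cmp_versions dispatches on the operator through the case "$op" seen above):

    # version_lt VER1 VER2 -> exit 0 iff VER1 < VER2, comparing numeric
    # fields split on '.', '-' and ':'; missing fields compare as 0.
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov is older than 2.x'   # same outcome as the return 0 above
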
00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:15.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.042 --rc genhtml_branch_coverage=1 00:26:15.042 --rc genhtml_function_coverage=1 00:26:15.042 --rc genhtml_legend=1 00:26:15.042 --rc geninfo_all_blocks=1 00:26:15.042 --rc geninfo_unexecuted_blocks=1 00:26:15.042 00:26:15.042 ' 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:15.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.042 --rc genhtml_branch_coverage=1 00:26:15.042 --rc genhtml_function_coverage=1 00:26:15.042 --rc genhtml_legend=1 00:26:15.042 --rc geninfo_all_blocks=1 00:26:15.042 --rc geninfo_unexecuted_blocks=1 00:26:15.042 00:26:15.042 ' 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:15.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.042 --rc genhtml_branch_coverage=1 00:26:15.042 --rc genhtml_function_coverage=1 00:26:15.042 --rc genhtml_legend=1 00:26:15.042 --rc geninfo_all_blocks=1 00:26:15.042 --rc geninfo_unexecuted_blocks=1 00:26:15.042 00:26:15.042 ' 00:26:15.042 06:52:47 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:15.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.042 --rc genhtml_branch_coverage=1 00:26:15.042 --rc genhtml_function_coverage=1 00:26:15.042 --rc genhtml_legend=1 00:26:15.042 --rc geninfo_all_blocks=1 00:26:15.042 --rc geninfo_unexecuted_blocks=1 00:26:15.042 00:26:15.042 ' 00:26:15.042 06:52:47 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:15.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:16.173 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.173 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.173 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.173 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.431 06:52:48 nvme -- nvme/nvme.sh@79 -- # uname 00:26:16.431 06:52:48 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:26:16.431 06:52:48 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:26:16.431 06:52:48 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:26:16.431 06:52:48 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:26:16.431
06:52:48 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:26:16.431 06:52:48 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:26:16.431 06:52:48 nvme -- common/autotest_common.sh@1075 -- # stubpid=64110 00:26:16.431 Waiting for stub to ready for secondary processes... 00:26:16.431 06:52:48 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:26:16.431 06:52:48 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:26:16.431 06:52:48 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:16.431 06:52:48 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64110 ]] 00:26:16.431 06:52:48 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:26:16.431 [2024-12-06 06:52:48.859891] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:26:16.431 [2024-12-06 06:52:48.860074] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:26:17.408 [2024-12-06 06:52:49.707846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:17.408 06:52:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:17.408 06:52:49 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64110 ]] 00:26:17.408 06:52:49 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:26:17.408 [2024-12-06 06:52:49.835983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.408 [2024-12-06 06:52:49.836106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.408 [2024-12-06 06:52:49.836122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.408 [2024-12-06 06:52:49.854720] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:26:17.408 [2024-12-06 06:52:49.854773] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:26:17.408 [2024-12-06 06:52:49.867419] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:26:17.408 [2024-12-06 06:52:49.867792] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:26:17.408 [2024-12-06 06:52:49.870854] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:26:17.408 [2024-12-06 06:52:49.871140] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:26:17.408 [2024-12-06 06:52:49.871250] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:26:17.408 [2024-12-06 06:52:49.874378] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:26:17.408 [2024-12-06 06:52:49.874628] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:26:17.408 [2024-12-06 06:52:49.874758] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:26:17.408 [2024-12-06 06:52:49.878018] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:26:17.408 [2024-12-06 06:52:49.878374] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:26:17.408 [2024-12-06 06:52:49.879268] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:26:17.408 [2024-12-06 06:52:49.879396] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:26:17.408 [2024-12-06 06:52:49.879481] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:26:18.344 06:52:50 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:18.344 done. 00:26:18.344 06:52:50 nvme -- common/autotest_common.sh@1082 -- # echo done.
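
The block above is the harness's stub hand-off: test/app/stub is launched as the DPDK primary process, and the caller polls once per second until the stub creates /var/run/spdk_stub0, checking /proc/<pid> on each pass so a crashed stub fails the wait instead of hanging the job. A minimal sketch of that polling pattern (the function name is illustrative; the paths and pid 64110 are the ones from this run):

    # Wait until the stub signals readiness by creating /var/run/spdk_stub0;
    # give up early if the stub process itself has already exited.
    wait_for_stub() {
        local stubpid=$1
        while [ ! -e /var/run/spdk_stub0 ]; do
            [[ -e /proc/$stubpid ]] || return 1   # stub died before becoming ready
            sleep 1s
        done
        echo done.
    }
    wait_for_stub 64110
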
00:26:18.344 06:52:50 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:26:18.344 06:52:50 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:26:18.344 06:52:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.344 06:52:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:26:18.344 ************************************ 00:26:18.344 START TEST nvme_reset 00:26:18.344 ************************************ 00:26:18.344 06:52:50 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:26:18.602 Initializing NVMe Controllers 00:26:18.602 Skipping QEMU NVMe SSD at 0000:00:10.0 00:26:18.602 Skipping QEMU NVMe SSD at 0000:00:11.0 00:26:18.602 Skipping QEMU NVMe SSD at 0000:00:13.0 00:26:18.602 Skipping QEMU NVMe SSD at 0000:00:12.0 00:26:18.602 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:26:18.602 00:26:18.602 real 0m0.329s 00:26:18.602 user 0m0.132s 00:26:18.602 sys 0m0.151s 00:26:18.602 06:52:51 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.602 06:52:51 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:26:18.602 ************************************ 00:26:18.602 END TEST nvme_reset 00:26:18.602 ************************************ 00:26:18.602 06:52:51 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:26:18.602 06:52:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:18.602 06:52:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.602 06:52:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:26:18.860 ************************************ 00:26:18.860 START TEST nvme_identify 00:26:18.860 ************************************ 00:26:18.860 06:52:51 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:26:18.860 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:26:18.860 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:26:18.860 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:26:18.860 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:26:18.860 06:52:51 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:18.860 06:52:51 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:26:18.860 06:52:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:18.860 06:52:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:18.860 06:52:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:18.860 06:52:51 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:26:18.860 06:52:51 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
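
nvme_identify gathers its target list exactly as traced above: scripts/gen_nvme.sh renders a JSON bdev config for every NVMe controller on the host, and jq extracts each params.traddr as a PCI address (BDF). A standalone sketch of that enumeration plus the identify invocation that follows (rootdir matches this run's checkout; the emptiness check mirrors the (( 4 == 0 )) guard in the trace):

    rootdir=/home/vagrant/spdk_repo/spdk
    # One BDF per line, e.g. 0000:00:10.0, straight from the generated bdev config.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"
    # The identify pass below attaches to the controllers in shared-memory group 0 (-i 0).
    "$rootdir/build/bin/spdk_nvme_identify" -i 0
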
00:26:18.860 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:26:19.122 [2024-12-06 06:52:51.533854] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64144 terminated unexpected 00:26:19.122 ===================================================== 00:26:19.122 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:19.122 ===================================================== 00:26:19.122 Controller Capabilities/Features 00:26:19.122 ================================ 00:26:19.122 Vendor ID: 1b36 00:26:19.122 Subsystem Vendor ID: 1af4 00:26:19.122 Serial Number: 12340 00:26:19.122 Model Number: QEMU NVMe Ctrl 00:26:19.122 Firmware Version: 8.0.0 00:26:19.122 Recommended Arb Burst: 6 00:26:19.122 IEEE OUI Identifier: 00 54 52 00:26:19.122 Multi-path I/O 00:26:19.122 May have multiple subsystem ports: No 00:26:19.122 May have multiple controllers: No 00:26:19.122 Associated with SR-IOV VF: No 00:26:19.122 Max Data Transfer Size: 524288 00:26:19.122 Max Number of Namespaces: 256 00:26:19.122 Max Number of I/O Queues: 64 00:26:19.122 NVMe Specification Version (VS): 1.4 00:26:19.122 NVMe Specification Version (Identify): 1.4 00:26:19.122 Maximum Queue Entries: 2048 00:26:19.122 Contiguous Queues Required: Yes 00:26:19.122 Arbitration Mechanisms Supported 00:26:19.122 Weighted Round Robin: Not Supported 00:26:19.122 Vendor Specific: Not Supported 00:26:19.122 Reset Timeout: 7500 ms 00:26:19.122 Doorbell Stride: 4 bytes 00:26:19.122 NVM Subsystem Reset: Not Supported 00:26:19.122 Command Sets Supported 00:26:19.122 NVM Command Set: Supported 00:26:19.122 Boot Partition: Not Supported 00:26:19.122 Memory Page Size Minimum: 4096 bytes 00:26:19.122 Memory Page Size Maximum: 65536 bytes 00:26:19.122 Persistent Memory Region: Not Supported 00:26:19.122 Optional Asynchronous Events Supported 00:26:19.122 Namespace Attribute Notices: Supported 00:26:19.122 Firmware Activation Notices: Not Supported 00:26:19.122 ANA Change Notices: Not Supported 00:26:19.122 PLE Aggregate Log Change Notices: Not Supported 00:26:19.122 LBA Status Info Alert Notices: Not Supported 00:26:19.122 EGE Aggregate Log Change Notices: Not Supported 00:26:19.122 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.122 Zone Descriptor Change Notices: Not Supported 00:26:19.122 Discovery Log Change Notices: Not Supported 00:26:19.122 Controller Attributes 00:26:19.122 128-bit Host Identifier: Not Supported 00:26:19.122 Non-Operational Permissive Mode: Not Supported 00:26:19.122 NVM Sets: Not Supported 00:26:19.122 Read Recovery Levels: Not Supported 00:26:19.122 Endurance Groups: Not Supported 00:26:19.122 Predictable Latency Mode: Not Supported 00:26:19.122 Traffic Based Keep ALive: Not Supported 00:26:19.122 Namespace Granularity: Not Supported 00:26:19.122 SQ Associations: Not Supported 00:26:19.122 UUID List: Not Supported 00:26:19.122 Multi-Domain Subsystem: Not Supported 00:26:19.122 Fixed Capacity Management: Not Supported 00:26:19.122 Variable Capacity Management: Not Supported 00:26:19.122 Delete Endurance Group: Not Supported 00:26:19.122 Delete NVM Set: Not Supported 00:26:19.122 Extended LBA Formats Supported: Supported 00:26:19.122 Flexible Data Placement Supported: Not Supported 00:26:19.122 00:26:19.122 Controller Memory Buffer Support 00:26:19.122 ================================ 00:26:19.122 Supported: No
00:26:19.123 00:26:19.123 Persistent Memory Region Support 00:26:19.123 ================================ 00:26:19.123 Supported: No 00:26:19.123 00:26:19.123 Admin Command Set Attributes 00:26:19.123 ============================ 00:26:19.123 Security Send/Receive: Not Supported 00:26:19.123 Format NVM: Supported 00:26:19.123 Firmware Activate/Download: Not Supported 00:26:19.123 Namespace Management: Supported 00:26:19.123 Device Self-Test: Not Supported 00:26:19.123 Directives: Supported 00:26:19.123 NVMe-MI: Not Supported 00:26:19.123 Virtualization Management: Not Supported 00:26:19.123 Doorbell Buffer Config: Supported 00:26:19.123 Get LBA Status Capability: Not Supported 00:26:19.123 Command & Feature Lockdown Capability: Not Supported 00:26:19.123 Abort Command Limit: 4 00:26:19.123 Async Event Request Limit: 4 00:26:19.123 Number of Firmware Slots: N/A 00:26:19.123 Firmware Slot 1 Read-Only: N/A 00:26:19.123 Firmware Activation Without Reset: N/A 00:26:19.123 Multiple Update Detection Support: N/A 00:26:19.123 Firmware Update Granularity: No Information Provided 00:26:19.123 Per-Namespace SMART Log: Yes 00:26:19.123 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.123 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:26:19.123 Command Effects Log Page: Supported 00:26:19.123 Get Log Page Extended Data: Supported 00:26:19.123 Telemetry Log Pages: Not Supported 00:26:19.123 Persistent Event Log Pages: Not Supported 00:26:19.123 Supported Log Pages Log Page: May Support 00:26:19.123 Commands Supported & Effects Log Page: Not Supported 00:26:19.123 Feature Identifiers & Effects Log Page:May Support 00:26:19.123 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.123 Data Area 4 for Telemetry Log: Not Supported 00:26:19.123 Error Log Page Entries Supported: 1 00:26:19.123 Keep Alive: Not Supported 00:26:19.123 00:26:19.123 NVM Command Set Attributes 00:26:19.123 ========================== 00:26:19.123 Submission Queue Entry Size 00:26:19.123 Max: 64 00:26:19.123 Min: 64 00:26:19.123 Completion Queue Entry Size 00:26:19.123 Max: 16 00:26:19.123 Min: 16 00:26:19.123 Number of Namespaces: 256 00:26:19.123 Compare Command: Supported 00:26:19.123 Write Uncorrectable Command: Not Supported 00:26:19.123 Dataset Management Command: Supported 00:26:19.123 Write Zeroes Command: Supported 00:26:19.123 Set Features Save Field: Supported 00:26:19.123 Reservations: Not Supported 00:26:19.123 Timestamp: Supported 00:26:19.123 Copy: Supported 00:26:19.123 Volatile Write Cache: Present 00:26:19.123 Atomic Write Unit (Normal): 1 00:26:19.123 Atomic Write Unit (PFail): 1 00:26:19.123 Atomic Compare & Write Unit: 1 00:26:19.123 Fused Compare & Write: Not Supported 00:26:19.123 Scatter-Gather List 00:26:19.123 SGL Command Set: Supported 00:26:19.123 SGL Keyed: Not Supported 00:26:19.123 SGL Bit Bucket Descriptor: Not Supported 00:26:19.123 SGL Metadata Pointer: Not Supported 00:26:19.123 Oversized SGL: Not Supported 00:26:19.123 SGL Metadata Address: Not Supported 00:26:19.123 SGL Offset: Not Supported 00:26:19.123 Transport SGL Data Block: Not Supported 00:26:19.123 Replay Protected Memory Block: Not Supported 00:26:19.123 00:26:19.123 Firmware Slot Information 00:26:19.123 ========================= 00:26:19.123 Active slot: 1 00:26:19.123 Slot 1 Firmware Revision: 1.0 00:26:19.123 00:26:19.123 00:26:19.123 Commands Supported and Effects 00:26:19.123 ============================== 00:26:19.123 Admin Commands 00:26:19.123 -------------- 00:26:19.123 Delete I/O Submission Queue (00h): Supported 
00:26:19.123 Create I/O Submission Queue (01h): Supported 00:26:19.123 Get Log Page (02h): Supported 00:26:19.123 Delete I/O Completion Queue (04h): Supported 00:26:19.123 Create I/O Completion Queue (05h): Supported 00:26:19.123 Identify (06h): Supported 00:26:19.123 Abort (08h): Supported 00:26:19.123 Set Features (09h): Supported 00:26:19.123 Get Features (0Ah): Supported 00:26:19.123 Asynchronous Event Request (0Ch): Supported 00:26:19.123 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:19.123 Directive Send (19h): Supported 00:26:19.123 Directive Receive (1Ah): Supported 00:26:19.123 Virtualization Management (1Ch): Supported 00:26:19.123 Doorbell Buffer Config (7Ch): Supported 00:26:19.123 Format NVM (80h): Supported LBA-Change 00:26:19.123 I/O Commands 00:26:19.123 ------------ 00:26:19.123 Flush (00h): Supported LBA-Change 00:26:19.123 Write (01h): Supported LBA-Change 00:26:19.123 Read (02h): Supported 00:26:19.123 Compare (05h): Supported 00:26:19.123 Write Zeroes (08h): Supported LBA-Change 00:26:19.123 Dataset Management (09h): Supported LBA-Change 00:26:19.123 Unknown (0Ch): Supported 00:26:19.123 Unknown (12h): Supported 00:26:19.123 Copy (19h): Supported LBA-Change 00:26:19.123 Unknown (1Dh): Supported LBA-Change 00:26:19.123 00:26:19.123 Error Log 00:26:19.123 ========= 00:26:19.123 00:26:19.123 Arbitration 00:26:19.123 =========== 00:26:19.123 Arbitration Burst: no limit 00:26:19.123 00:26:19.123 Power Management 00:26:19.123 ================ 00:26:19.123 Number of Power States: 1 00:26:19.123 Current Power State: Power State #0 00:26:19.123 Power State #0: 00:26:19.123 Max Power: 25.00 W 00:26:19.123 Non-Operational State: Operational 00:26:19.123 Entry Latency: 16 microseconds 00:26:19.123 Exit Latency: 4 microseconds 00:26:19.123 Relative Read Throughput: 0 00:26:19.123 Relative Read Latency: 0 00:26:19.123 Relative Write Throughput: 0 00:26:19.123 Relative Write Latency: 0 00:26:19.123 [2024-12-06 06:52:51.535421] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64144 terminated unexpected 00:26:19.123 Idle Power: Not Reported 00:26:19.123 Active Power: Not Reported 00:26:19.123 Non-Operational Permissive Mode: Not Supported 00:26:19.123 00:26:19.123 Health Information 00:26:19.123 ================== 00:26:19.123 Critical Warnings: 00:26:19.123 Available Spare Space: OK 00:26:19.123 Temperature: OK 00:26:19.123 Device Reliability: OK 00:26:19.123 Read Only: No 00:26:19.123 Volatile Memory Backup: OK 00:26:19.123 Current Temperature: 323 Kelvin (50 Celsius) 00:26:19.123 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:19.123 Available Spare: 0% 00:26:19.123 Available Spare Threshold: 0% 00:26:19.123 Life Percentage Used: 0% 00:26:19.123 Data Units Read: 635 00:26:19.123 Data Units Written: 563 00:26:19.123 Host Read Commands: 32701 00:26:19.123 Host Write Commands: 32487 00:26:19.123 Controller Busy Time: 0 minutes 00:26:19.123 Power Cycles: 0 00:26:19.123 Power On Hours: 0 hours 00:26:19.123 Unsafe Shutdowns: 0 00:26:19.123 Unrecoverable Media Errors: 0 00:26:19.123 Lifetime Error Log Entries: 0 00:26:19.123 Warning Temperature Time: 0 minutes 00:26:19.123 Critical Temperature Time: 0 minutes 00:26:19.123 00:26:19.123 Number of Queues 00:26:19.123 ================ 00:26:19.123 Number of I/O Submission Queues: 64 00:26:19.123 Number of I/O Completion Queues: 64 00:26:19.123 00:26:19.123 ZNS Specific Controller Data 00:26:19.123 ============================ 00:26:19.124 Zone Append Size Limit: 0 00:26:19.124
00:26:19.124 00:26:19.124 Active Namespaces 00:26:19.124 ================= 00:26:19.124 Namespace ID:1 00:26:19.124 Error Recovery Timeout: Unlimited 00:26:19.124 Command Set Identifier: NVM (00h) 00:26:19.124 Deallocate: Supported 00:26:19.124 Deallocated/Unwritten Error: Supported 00:26:19.124 Deallocated Read Value: All 0x00 00:26:19.124 Deallocate in Write Zeroes: Not Supported 00:26:19.124 Deallocated Guard Field: 0xFFFF 00:26:19.124 Flush: Supported 00:26:19.124 Reservation: Not Supported 00:26:19.124 Metadata Transferred as: Separate Metadata Buffer 00:26:19.124 Namespace Sharing Capabilities: Private 00:26:19.124 Size (in LBAs): 1548666 (5GiB) 00:26:19.124 Capacity (in LBAs): 1548666 (5GiB) 00:26:19.124 Utilization (in LBAs): 1548666 (5GiB) 00:26:19.124 Thin Provisioning: Not Supported 00:26:19.124 Per-NS Atomic Units: No 00:26:19.124 Maximum Single Source Range Length: 128 00:26:19.124 Maximum Copy Length: 128 00:26:19.124 Maximum Source Range Count: 128 00:26:19.124 NGUID/EUI64 Never Reused: No 00:26:19.124 Namespace Write Protected: No 00:26:19.124 Number of LBA Formats: 8 00:26:19.124 Current LBA Format: LBA Format #07 00:26:19.124 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.124 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.124 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.124 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.124 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.124 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.124 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.124 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.124 00:26:19.124 NVM Specific Namespace Data 00:26:19.124 =========================== 00:26:19.124 Logical Block Storage Tag Mask: 0 00:26:19.124 Protection Information Capabilities: 00:26:19.124 16b Guard Protection Information Storage Tag Support: No 00:26:19.124 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.124 Storage Tag Check Read Support: No 00:26:19.124 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.124 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.124 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.124 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.124 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.124 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.124 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.124 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.124 ===================================================== 00:26:19.124 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:26:19.124 ===================================================== 00:26:19.124 Controller Capabilities/Features 00:26:19.124 ================================ 00:26:19.124 Vendor ID: 1b36 00:26:19.124 Subsystem Vendor ID: 1af4 00:26:19.124 Serial Number: 12341 00:26:19.124 Model Number: QEMU NVMe Ctrl 00:26:19.124 Firmware Version: 8.0.0 00:26:19.124 Recommended Arb Burst: 6 00:26:19.124 IEEE OUI Identifier: 00 54 52 00:26:19.124 Multi-path I/O 00:26:19.124 May have multiple subsystem ports: No 00:26:19.124 May have multiple controllers: No 
00:26:19.124 Associated with SR-IOV VF: No 00:26:19.124 Max Data Transfer Size: 524288 00:26:19.124 Max Number of Namespaces: 256 00:26:19.124 Max Number of I/O Queues: 64 00:26:19.124 NVMe Specification Version (VS): 1.4 00:26:19.124 NVMe Specification Version (Identify): 1.4 00:26:19.124 Maximum Queue Entries: 2048 00:26:19.124 Contiguous Queues Required: Yes 00:26:19.124 Arbitration Mechanisms Supported 00:26:19.124 Weighted Round Robin: Not Supported 00:26:19.124 Vendor Specific: Not Supported 00:26:19.124 Reset Timeout: 7500 ms 00:26:19.124 Doorbell Stride: 4 bytes 00:26:19.124 NVM Subsystem Reset: Not Supported 00:26:19.124 Command Sets Supported 00:26:19.124 NVM Command Set: Supported 00:26:19.124 Boot Partition: Not Supported 00:26:19.124 Memory Page Size Minimum: 4096 bytes 00:26:19.124 Memory Page Size Maximum: 65536 bytes 00:26:19.124 Persistent Memory Region: Not Supported 00:26:19.124 Optional Asynchronous Events Supported 00:26:19.124 Namespace Attribute Notices: Supported 00:26:19.124 Firmware Activation Notices: Not Supported 00:26:19.124 ANA Change Notices: Not Supported 00:26:19.124 PLE Aggregate Log Change Notices: Not Supported 00:26:19.124 LBA Status Info Alert Notices: Not Supported 00:26:19.124 EGE Aggregate Log Change Notices: Not Supported 00:26:19.124 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.124 Zone Descriptor Change Notices: Not Supported 00:26:19.124 Discovery Log Change Notices: Not Supported 00:26:19.124 Controller Attributes 00:26:19.124 128-bit Host Identifier: Not Supported 00:26:19.124 Non-Operational Permissive Mode: Not Supported 00:26:19.124 NVM Sets: Not Supported 00:26:19.124 Read Recovery Levels: Not Supported 00:26:19.124 Endurance Groups: Not Supported 00:26:19.124 Predictable Latency Mode: Not Supported 00:26:19.124 Traffic Based Keep ALive: Not Supported 00:26:19.124 Namespace Granularity: Not Supported 00:26:19.124 SQ Associations: Not Supported 00:26:19.124 UUID List: Not Supported 00:26:19.124 Multi-Domain Subsystem: Not Supported 00:26:19.124 Fixed Capacity Management: Not Supported 00:26:19.124 Variable Capacity Management: Not Supported 00:26:19.124 Delete Endurance Group: Not Supported 00:26:19.124 Delete NVM Set: Not Supported 00:26:19.124 Extended LBA Formats Supported: Supported 00:26:19.124 Flexible Data Placement Supported: Not Supported 00:26:19.124 00:26:19.124 Controller Memory Buffer Support 00:26:19.124 ================================ 00:26:19.124 Supported: No 00:26:19.124 00:26:19.124 Persistent Memory Region Support 00:26:19.124 ================================ 00:26:19.124 Supported: No 00:26:19.124 00:26:19.124 Admin Command Set Attributes 00:26:19.124 ============================ 00:26:19.124 Security Send/Receive: Not Supported 00:26:19.124 Format NVM: Supported 00:26:19.124 Firmware Activate/Download: Not Supported 00:26:19.124 Namespace Management: Supported 00:26:19.124 Device Self-Test: Not Supported 00:26:19.124 Directives: Supported 00:26:19.124 NVMe-MI: Not Supported 00:26:19.124 Virtualization Management: Not Supported 00:26:19.124 Doorbell Buffer Config: Supported 00:26:19.124 Get LBA Status Capability: Not Supported 00:26:19.124 Command & Feature Lockdown Capability: Not Supported 00:26:19.124 Abort Command Limit: 4 00:26:19.124 Async Event Request Limit: 4 00:26:19.124 Number of Firmware Slots: N/A 00:26:19.124 Firmware Slot 1 Read-Only: N/A 00:26:19.124 Firmware Activation Without Reset: N/A 00:26:19.124 Multiple Update Detection Support: N/A 00:26:19.124 Firmware Update Granularity: No 
Information Provided 00:26:19.124 Per-Namespace SMART Log: Yes 00:26:19.124 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.124 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:26:19.124 Command Effects Log Page: Supported 00:26:19.124 Get Log Page Extended Data: Supported 00:26:19.124 Telemetry Log Pages: Not Supported 00:26:19.124 Persistent Event Log Pages: Not Supported 00:26:19.124 Supported Log Pages Log Page: May Support 00:26:19.124 Commands Supported & Effects Log Page: Not Supported 00:26:19.124 Feature Identifiers & Effects Log Page:May Support 00:26:19.124 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.124 Data Area 4 for Telemetry Log: Not Supported 00:26:19.124 Error Log Page Entries Supported: 1 00:26:19.124 Keep Alive: Not Supported 00:26:19.124 00:26:19.124 NVM Command Set Attributes 00:26:19.124 ========================== 00:26:19.124 Submission Queue Entry Size 00:26:19.124 Max: 64 00:26:19.124 Min: 64 00:26:19.124 Completion Queue Entry Size 00:26:19.124 Max: 16 00:26:19.124 Min: 16 00:26:19.124 Number of Namespaces: 256 00:26:19.124 Compare Command: Supported 00:26:19.124 Write Uncorrectable Command: Not Supported 00:26:19.124 Dataset Management Command: Supported 00:26:19.124 Write Zeroes Command: Supported 00:26:19.124 Set Features Save Field: Supported 00:26:19.124 Reservations: Not Supported 00:26:19.124 Timestamp: Supported 00:26:19.124 Copy: Supported 00:26:19.124 Volatile Write Cache: Present 00:26:19.124 Atomic Write Unit (Normal): 1 00:26:19.124 Atomic Write Unit (PFail): 1 00:26:19.124 Atomic Compare & Write Unit: 1 00:26:19.124 Fused Compare & Write: Not Supported 00:26:19.124 Scatter-Gather List 00:26:19.124 SGL Command Set: Supported 00:26:19.124 SGL Keyed: Not Supported 00:26:19.124 SGL Bit Bucket Descriptor: Not Supported 00:26:19.124 SGL Metadata Pointer: Not Supported 00:26:19.125 Oversized SGL: Not Supported 00:26:19.125 SGL Metadata Address: Not Supported 00:26:19.125 SGL Offset: Not Supported 00:26:19.125 Transport SGL Data Block: Not Supported 00:26:19.125 Replay Protected Memory Block: Not Supported 00:26:19.125 00:26:19.125 Firmware Slot Information 00:26:19.125 ========================= 00:26:19.125 Active slot: 1 00:26:19.125 Slot 1 Firmware Revision: 1.0 00:26:19.125 00:26:19.125 00:26:19.125 Commands Supported and Effects 00:26:19.125 ============================== 00:26:19.125 Admin Commands 00:26:19.125 -------------- 00:26:19.125 Delete I/O Submission Queue (00h): Supported 00:26:19.125 Create I/O Submission Queue (01h): Supported 00:26:19.125 Get Log Page (02h): Supported 00:26:19.125 Delete I/O Completion Queue (04h): Supported 00:26:19.125 Create I/O Completion Queue (05h): Supported 00:26:19.125 Identify (06h): Supported 00:26:19.125 Abort (08h): Supported 00:26:19.125 Set Features (09h): Supported 00:26:19.125 Get Features (0Ah): Supported 00:26:19.125 Asynchronous Event Request (0Ch): Supported 00:26:19.125 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:19.125 Directive Send (19h): Supported 00:26:19.125 Directive Receive (1Ah): Supported 00:26:19.125 Virtualization Management (1Ch): Supported 00:26:19.125 Doorbell Buffer Config (7Ch): Supported 00:26:19.125 Format NVM (80h): Supported LBA-Change 00:26:19.125 I/O Commands 00:26:19.125 ------------ 00:26:19.125 Flush (00h): Supported LBA-Change 00:26:19.125 Write (01h): Supported LBA-Change 00:26:19.125 Read (02h): Supported 00:26:19.125 Compare (05h): Supported 00:26:19.125 Write Zeroes (08h): Supported LBA-Change 00:26:19.125 Dataset Management 
(09h): Supported LBA-Change 00:26:19.125 Unknown (0Ch): Supported 00:26:19.125 Unknown (12h): Supported 00:26:19.125 Copy (19h): Supported LBA-Change 00:26:19.125 Unknown (1Dh): Supported LBA-Change 00:26:19.125 00:26:19.125 Error Log 00:26:19.125 ========= 00:26:19.125 00:26:19.125 Arbitration 00:26:19.125 =========== 00:26:19.125 Arbitration Burst: no limit 00:26:19.125 00:26:19.125 Power Management 00:26:19.125 ================ 00:26:19.125 Number of Power States: 1 00:26:19.125 Current Power State: Power State #0 00:26:19.125 Power State #0: 00:26:19.125 Max Power: 25.00 W 00:26:19.125 Non-Operational State: Operational 00:26:19.125 Entry Latency: 16 microseconds 00:26:19.125 Exit Latency: 4 microseconds 00:26:19.125 Relative Read Throughput: 0 00:26:19.125 Relative Read Latency: 0 00:26:19.125 Relative Write Throughput: 0 00:26:19.125 Relative Write Latency: 0 00:26:19.125 Idle Power: Not Reported 00:26:19.125 Active Power: Not Reported 00:26:19.125 Non-Operational Permissive Mode: Not Supported 00:26:19.125 00:26:19.125 Health Information 00:26:19.125 ================== 00:26:19.125 Critical Warnings: 00:26:19.125 Available Spare Space: OK 00:26:19.125 [2024-12-06 06:52:51.536274] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64144 terminated unexpected 00:26:19.125 Temperature: OK 00:26:19.125 Device Reliability: OK 00:26:19.125 Read Only: No 00:26:19.125 Volatile Memory Backup: OK 00:26:19.125 Current Temperature: 323 Kelvin (50 Celsius) 00:26:19.125 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:19.125 Available Spare: 0% 00:26:19.125 Available Spare Threshold: 0% 00:26:19.125 Life Percentage Used: 0% 00:26:19.125 Data Units Read: 943 00:26:19.125 Data Units Written: 810 00:26:19.125 Host Read Commands: 48497 00:26:19.125 Host Write Commands: 47258 00:26:19.125 Controller Busy Time: 0 minutes 00:26:19.125 Power Cycles: 0 00:26:19.125 Power On Hours: 0 hours 00:26:19.125 Unsafe Shutdowns: 0 00:26:19.125 Unrecoverable Media Errors: 0 00:26:19.125 Lifetime Error Log Entries: 0 00:26:19.125 Warning Temperature Time: 0 minutes 00:26:19.125 Critical Temperature Time: 0 minutes 00:26:19.125 00:26:19.125 Number of Queues 00:26:19.125 ================ 00:26:19.125 Number of I/O Submission Queues: 64 00:26:19.125 Number of I/O Completion Queues: 64 00:26:19.125 00:26:19.125 ZNS Specific Controller Data 00:26:19.125 ============================ 00:26:19.125 Zone Append Size Limit: 0 00:26:19.125 00:26:19.125 00:26:19.125 Active Namespaces 00:26:19.125 ================= 00:26:19.125 Namespace ID:1 00:26:19.125 Error Recovery Timeout: Unlimited 00:26:19.125 Command Set Identifier: NVM (00h) 00:26:19.125 Deallocate: Supported 00:26:19.125 Deallocated/Unwritten Error: Supported 00:26:19.125 Deallocated Read Value: All 0x00 00:26:19.125 Deallocate in Write Zeroes: Not Supported 00:26:19.125 Deallocated Guard Field: 0xFFFF 00:26:19.125 Flush: Supported 00:26:19.125 Reservation: Not Supported 00:26:19.125 Namespace Sharing Capabilities: Private 00:26:19.125 Size (in LBAs): 1310720 (5GiB) 00:26:19.125 Capacity (in LBAs): 1310720 (5GiB) 00:26:19.125 Utilization (in LBAs): 1310720 (5GiB) 00:26:19.125 Thin Provisioning: Not Supported 00:26:19.125 Per-NS Atomic Units: No 00:26:19.125 Maximum Single Source Range Length: 128 00:26:19.125 Maximum Copy Length: 128 00:26:19.125 Maximum Source Range Count: 128 00:26:19.125 NGUID/EUI64 Never Reused: No 00:26:19.125 Namespace Write Protected: No 00:26:19.125 Number of LBA Formats: 8 00:26:19.125 Current LBA Format:
LBA Format #04 00:26:19.125 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.125 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.125 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.125 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.125 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.125 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.125 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.125 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.125 00:26:19.125 NVM Specific Namespace Data 00:26:19.125 =========================== 00:26:19.125 Logical Block Storage Tag Mask: 0 00:26:19.125 Protection Information Capabilities: 00:26:19.125 16b Guard Protection Information Storage Tag Support: No 00:26:19.125 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.125 Storage Tag Check Read Support: No 00:26:19.125 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.125 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.125 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.125 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.125 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.125 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.125 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.125 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.125 ===================================================== 00:26:19.125 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:26:19.125 ===================================================== 00:26:19.125 Controller Capabilities/Features 00:26:19.125 ================================ 00:26:19.125 Vendor ID: 1b36 00:26:19.125 Subsystem Vendor ID: 1af4 00:26:19.125 Serial Number: 12343 00:26:19.125 Model Number: QEMU NVMe Ctrl 00:26:19.125 Firmware Version: 8.0.0 00:26:19.125 Recommended Arb Burst: 6 00:26:19.125 IEEE OUI Identifier: 00 54 52 00:26:19.125 Multi-path I/O 00:26:19.125 May have multiple subsystem ports: No 00:26:19.125 May have multiple controllers: Yes 00:26:19.125 Associated with SR-IOV VF: No 00:26:19.125 Max Data Transfer Size: 524288 00:26:19.125 Max Number of Namespaces: 256 00:26:19.125 Max Number of I/O Queues: 64 00:26:19.125 NVMe Specification Version (VS): 1.4 00:26:19.125 NVMe Specification Version (Identify): 1.4 00:26:19.125 Maximum Queue Entries: 2048 00:26:19.125 Contiguous Queues Required: Yes 00:26:19.125 Arbitration Mechanisms Supported 00:26:19.125 Weighted Round Robin: Not Supported 00:26:19.125 Vendor Specific: Not Supported 00:26:19.125 Reset Timeout: 7500 ms 00:26:19.125 Doorbell Stride: 4 bytes 00:26:19.125 NVM Subsystem Reset: Not Supported 00:26:19.125 Command Sets Supported 00:26:19.125 NVM Command Set: Supported 00:26:19.125 Boot Partition: Not Supported 00:26:19.125 Memory Page Size Minimum: 4096 bytes 00:26:19.125 Memory Page Size Maximum: 65536 bytes 00:26:19.125 Persistent Memory Region: Not Supported 00:26:19.125 Optional Asynchronous Events Supported 00:26:19.126 Namespace Attribute Notices: Supported 00:26:19.126 Firmware Activation Notices: Not Supported 00:26:19.126 ANA Change Notices: Not Supported 00:26:19.126 PLE Aggregate Log 
Change Notices: Not Supported 00:26:19.126 LBA Status Info Alert Notices: Not Supported 00:26:19.126 EGE Aggregate Log Change Notices: Not Supported 00:26:19.126 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.126 Zone Descriptor Change Notices: Not Supported 00:26:19.126 Discovery Log Change Notices: Not Supported 00:26:19.126 Controller Attributes 00:26:19.126 128-bit Host Identifier: Not Supported 00:26:19.126 Non-Operational Permissive Mode: Not Supported 00:26:19.126 NVM Sets: Not Supported 00:26:19.126 Read Recovery Levels: Not Supported 00:26:19.126 Endurance Groups: Supported 00:26:19.126 Predictable Latency Mode: Not Supported 00:26:19.126 Traffic Based Keep ALive: Not Supported 00:26:19.126 Namespace Granularity: Not Supported 00:26:19.126 SQ Associations: Not Supported 00:26:19.126 UUID List: Not Supported 00:26:19.126 Multi-Domain Subsystem: Not Supported 00:26:19.126 Fixed Capacity Management: Not Supported 00:26:19.126 Variable Capacity Management: Not Supported 00:26:19.126 Delete Endurance Group: Not Supported 00:26:19.126 Delete NVM Set: Not Supported 00:26:19.126 Extended LBA Formats Supported: Supported 00:26:19.126 Flexible Data Placement Supported: Supported 00:26:19.126 00:26:19.126 Controller Memory Buffer Support 00:26:19.126 ================================ 00:26:19.126 Supported: No 00:26:19.126 00:26:19.126 Persistent Memory Region Support 00:26:19.126 ================================ 00:26:19.126 Supported: No 00:26:19.126 00:26:19.126 Admin Command Set Attributes 00:26:19.126 ============================ 00:26:19.126 Security Send/Receive: Not Supported 00:26:19.126 Format NVM: Supported 00:26:19.126 Firmware Activate/Download: Not Supported 00:26:19.126 Namespace Management: Supported 00:26:19.126 Device Self-Test: Not Supported 00:26:19.126 Directives: Supported 00:26:19.126 NVMe-MI: Not Supported 00:26:19.126 Virtualization Management: Not Supported 00:26:19.126 Doorbell Buffer Config: Supported 00:26:19.126 Get LBA Status Capability: Not Supported 00:26:19.126 Command & Feature Lockdown Capability: Not Supported 00:26:19.126 Abort Command Limit: 4 00:26:19.126 Async Event Request Limit: 4 00:26:19.126 Number of Firmware Slots: N/A 00:26:19.126 Firmware Slot 1 Read-Only: N/A 00:26:19.126 Firmware Activation Without Reset: N/A 00:26:19.126 Multiple Update Detection Support: N/A 00:26:19.126 Firmware Update Granularity: No Information Provided 00:26:19.126 Per-Namespace SMART Log: Yes 00:26:19.126 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.126 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:26:19.126 Command Effects Log Page: Supported 00:26:19.126 Get Log Page Extended Data: Supported 00:26:19.126 Telemetry Log Pages: Not Supported 00:26:19.126 Persistent Event Log Pages: Not Supported 00:26:19.126 Supported Log Pages Log Page: May Support 00:26:19.126 Commands Supported & Effects Log Page: Not Supported 00:26:19.126 Feature Identifiers & Effects Log Page:May Support 00:26:19.126 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.126 Data Area 4 for Telemetry Log: Not Supported 00:26:19.126 Error Log Page Entries Supported: 1 00:26:19.126 Keep Alive: Not Supported 00:26:19.126 00:26:19.126 NVM Command Set Attributes 00:26:19.126 ========================== 00:26:19.126 Submission Queue Entry Size 00:26:19.126 Max: 64 00:26:19.126 Min: 64 00:26:19.126 Completion Queue Entry Size 00:26:19.126 Max: 16 00:26:19.126 Min: 16 00:26:19.126 Number of Namespaces: 256 00:26:19.126 Compare Command: Supported 00:26:19.126 Write 
Uncorrectable Command: Not Supported 00:26:19.126 Dataset Management Command: Supported 00:26:19.126 Write Zeroes Command: Supported 00:26:19.126 Set Features Save Field: Supported 00:26:19.126 Reservations: Not Supported 00:26:19.126 Timestamp: Supported 00:26:19.126 Copy: Supported 00:26:19.126 Volatile Write Cache: Present 00:26:19.126 Atomic Write Unit (Normal): 1 00:26:19.126 Atomic Write Unit (PFail): 1 00:26:19.126 Atomic Compare & Write Unit: 1 00:26:19.126 Fused Compare & Write: Not Supported 00:26:19.126 Scatter-Gather List 00:26:19.126 SGL Command Set: Supported 00:26:19.126 SGL Keyed: Not Supported 00:26:19.126 SGL Bit Bucket Descriptor: Not Supported 00:26:19.126 SGL Metadata Pointer: Not Supported 00:26:19.126 Oversized SGL: Not Supported 00:26:19.126 SGL Metadata Address: Not Supported 00:26:19.126 SGL Offset: Not Supported 00:26:19.126 Transport SGL Data Block: Not Supported 00:26:19.126 Replay Protected Memory Block: Not Supported 00:26:19.126 00:26:19.126 Firmware Slot Information 00:26:19.126 ========================= 00:26:19.126 Active slot: 1 00:26:19.126 Slot 1 Firmware Revision: 1.0 00:26:19.126 00:26:19.126 00:26:19.126 Commands Supported and Effects 00:26:19.126 ============================== 00:26:19.126 Admin Commands 00:26:19.126 -------------- 00:26:19.126 Delete I/O Submission Queue (00h): Supported 00:26:19.126 Create I/O Submission Queue (01h): Supported 00:26:19.126 Get Log Page (02h): Supported 00:26:19.126 Delete I/O Completion Queue (04h): Supported 00:26:19.126 Create I/O Completion Queue (05h): Supported 00:26:19.126 Identify (06h): Supported 00:26:19.126 Abort (08h): Supported 00:26:19.126 Set Features (09h): Supported 00:26:19.126 Get Features (0Ah): Supported 00:26:19.126 Asynchronous Event Request (0Ch): Supported 00:26:19.126 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:19.126 Directive Send (19h): Supported 00:26:19.126 Directive Receive (1Ah): Supported 00:26:19.126 Virtualization Management (1Ch): Supported 00:26:19.126 Doorbell Buffer Config (7Ch): Supported 00:26:19.126 Format NVM (80h): Supported LBA-Change 00:26:19.126 I/O Commands 00:26:19.126 ------------ 00:26:19.126 Flush (00h): Supported LBA-Change 00:26:19.126 Write (01h): Supported LBA-Change 00:26:19.126 Read (02h): Supported 00:26:19.126 Compare (05h): Supported 00:26:19.126 Write Zeroes (08h): Supported LBA-Change 00:26:19.126 Dataset Management (09h): Supported LBA-Change 00:26:19.126 Unknown (0Ch): Supported 00:26:19.126 Unknown (12h): Supported 00:26:19.126 Copy (19h): Supported LBA-Change 00:26:19.126 Unknown (1Dh): Supported LBA-Change 00:26:19.126 00:26:19.126 Error Log 00:26:19.126 ========= 00:26:19.126 00:26:19.126 Arbitration 00:26:19.126 =========== 00:26:19.126 Arbitration Burst: no limit 00:26:19.126 00:26:19.126 Power Management 00:26:19.126 ================ 00:26:19.126 Number of Power States: 1 00:26:19.126 Current Power State: Power State #0 00:26:19.126 Power State #0: 00:26:19.126 Max Power: 25.00 W 00:26:19.126 Non-Operational State: Operational 00:26:19.126 Entry Latency: 16 microseconds 00:26:19.126 Exit Latency: 4 microseconds 00:26:19.126 Relative Read Throughput: 0 00:26:19.126 Relative Read Latency: 0 00:26:19.126 Relative Write Throughput: 0 00:26:19.126 Relative Write Latency: 0 00:26:19.126 Idle Power: Not Reported 00:26:19.126 Active Power: Not Reported 00:26:19.126 Non-Operational Permissive Mode: Not Supported 00:26:19.126 00:26:19.126 Health Information 00:26:19.126 ================== 00:26:19.126 Critical Warnings: 00:26:19.126 
Available Spare Space: OK 00:26:19.126 Temperature: OK 00:26:19.126 Device Reliability: OK 00:26:19.126 Read Only: No 00:26:19.126 Volatile Memory Backup: OK 00:26:19.126 Current Temperature: 323 Kelvin (50 Celsius) 00:26:19.126 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:19.126 Available Spare: 0% 00:26:19.126 Available Spare Threshold: 0% 00:26:19.126 Life Percentage Used: 0% 00:26:19.126 Data Units Read: 769 00:26:19.126 Data Units Written: 699 00:26:19.126 Host Read Commands: 34109 00:26:19.126 Host Write Commands: 33532 00:26:19.126 Controller Busy Time: 0 minutes 00:26:19.126 Power Cycles: 0 00:26:19.126 Power On Hours: 0 hours 00:26:19.126 Unsafe Shutdowns: 0 00:26:19.126 Unrecoverable Media Errors: 0 00:26:19.126 Lifetime Error Log Entries: 0 00:26:19.126 Warning Temperature Time: 0 minutes 00:26:19.126 Critical Temperature Time: 0 minutes 00:26:19.127 00:26:19.127 Number of Queues 00:26:19.127 ================ 00:26:19.127 Number of I/O Submission Queues: 64 00:26:19.127 Number of I/O Completion Queues: 64 00:26:19.127 00:26:19.127 ZNS Specific Controller Data 00:26:19.127 ============================ 00:26:19.127 Zone Append Size Limit: 0 00:26:19.127 00:26:19.127 00:26:19.127 Active Namespaces 00:26:19.127 ================= 00:26:19.127 Namespace ID:1 00:26:19.127 Error Recovery Timeout: Unlimited 00:26:19.127 Command Set Identifier: NVM (00h) 00:26:19.127 Deallocate: Supported 00:26:19.127 Deallocated/Unwritten Error: Supported 00:26:19.127 Deallocated Read Value: All 0x00 00:26:19.127 Deallocate in Write Zeroes: Not Supported 00:26:19.127 Deallocated Guard Field: 0xFFFF 00:26:19.127 Flush: Supported 00:26:19.127 Reservation: Not Supported 00:26:19.127 Namespace Sharing Capabilities: Multiple Controllers 00:26:19.127 Size (in LBAs): 262144 (1GiB) 00:26:19.127 Capacity (in LBAs): 262144 (1GiB) 00:26:19.127 Utilization (in LBAs): 262144 (1GiB) 00:26:19.127 Thin Provisioning: Not Supported 00:26:19.127 Per-NS Atomic Units: No 00:26:19.127 Maximum Single Source Range Length: 128 00:26:19.127 Maximum Copy Length: 128 00:26:19.127 Maximum Source Range Count: 128 00:26:19.127 NGUID/EUI64 Never Reused: No 00:26:19.127 Namespace Write Protected: No 00:26:19.127 Endurance group ID: 1 00:26:19.127 Number of LBA Formats: 8 00:26:19.127 Current LBA Format: LBA Format #04 00:26:19.127 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.127 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.127 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.127 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.127 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.127 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.127 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.127 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.127 00:26:19.127 Get Feature FDP: 00:26:19.127 ================ 00:26:19.127 Enabled: Yes 00:26:19.127 FDP configuration index: 0 00:26:19.127 00:26:19.127 FDP configurations log page 00:26:19.127 =========================== 00:26:19.127 Number of FDP configurations: 1 00:26:19.127 Version: 0 00:26:19.127 Size: 112 00:26:19.127 FDP Configuration Descriptor: 0 00:26:19.127 Descriptor Size: 96 00:26:19.127 Reclaim Group Identifier format: 2 00:26:19.127 FDP Volatile Write Cache: Not Present 00:26:19.127 FDP Configuration: Valid 00:26:19.127 Vendor Specific Size: 0 00:26:19.127 Number of Reclaim Groups: 2 00:26:19.127 Number of Reclaim Unit Handles: 8 00:26:19.127 Max Placement Identifiers: 128 00:26:19.127 Number of Namespaces Supported: 256 00:26:19.127 Reclaim unit Nominal Size: 6000000 bytes 00:26:19.127 Estimated Reclaim Unit Time Limit: Not Reported 00:26:19.127 RUH Desc #000: RUH Type: Initially Isolated 00:26:19.127 RUH Desc #001: RUH Type: Initially Isolated 00:26:19.127 RUH Desc #002: RUH Type: Initially Isolated 00:26:19.127 RUH Desc #003: RUH Type: Initially Isolated 00:26:19.127 RUH Desc #004: RUH Type: Initially Isolated 00:26:19.127 RUH Desc #005: RUH Type: Initially Isolated 00:26:19.127 RUH Desc #006: RUH Type: Initially Isolated 00:26:19.127 RUH Desc #007: RUH Type: Initially Isolated 00:26:19.127 00:26:19.127 FDP reclaim unit handle usage log page 00:26:19.127 ====================================== 00:26:19.127 Number of Reclaim Unit Handles: 8 00:26:19.127 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:26:19.127 RUH Usage Desc #001: RUH Attributes: Unused 00:26:19.127 RUH Usage Desc #002: RUH Attributes: Unused 00:26:19.127 RUH Usage Desc #003: RUH Attributes: Unused 00:26:19.127 RUH Usage Desc #004: RUH Attributes: Unused 00:26:19.127 RUH Usage Desc #005: RUH Attributes: Unused 00:26:19.127 RUH Usage Desc #006: RUH Attributes: Unused 00:26:19.127 RUH Usage Desc #007: RUH Attributes: Unused 00:26:19.127 00:26:19.127 FDP statistics log page 00:26:19.127 ======================= 00:26:19.127 Host bytes with metadata written: 438673408 00:26:19.127 [2024-12-06 06:52:51.538016] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64144 terminated unexpected 00:26:19.127 Media bytes with metadata written: 438738944 00:26:19.127 Media bytes erased: 0 00:26:19.127 00:26:19.127 FDP events log page 00:26:19.127 =================== 00:26:19.127 Number of FDP events: 0 00:26:19.127 00:26:19.127 NVM Specific Namespace Data 00:26:19.127 =========================== 00:26:19.127 Logical Block Storage Tag Mask: 0 00:26:19.127 Protection Information Capabilities: 00:26:19.127 16b Guard Protection Information Storage Tag Support: No 00:26:19.127 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.127 Storage Tag Check Read Support: No 00:26:19.127 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.127 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.127 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.127 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.127 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.127 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.127 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.127 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.127 ===================================================== 00:26:19.127 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:26:19.127 ===================================================== 00:26:19.127 Controller Capabilities/Features 00:26:19.127 ================================ 00:26:19.127 Vendor ID: 1b36 00:26:19.127 Subsystem Vendor ID: 1af4 00:26:19.127 Serial Number: 12342 00:26:19.127 Model Number: QEMU NVMe Ctrl 00:26:19.127 Firmware Version: 8.0.0 00:26:19.127 Recommended Arb Burst: 6 00:26:19.127 IEEE OUI Identifier: 00 54 52 00:26:19.127 Multi-path I/O
00:26:19.127 May have multiple subsystem ports: No 00:26:19.127 May have multiple controllers: No 00:26:19.127 Associated with SR-IOV VF: No 00:26:19.127 Max Data Transfer Size: 524288 00:26:19.127 Max Number of Namespaces: 256 00:26:19.127 Max Number of I/O Queues: 64 00:26:19.127 NVMe Specification Version (VS): 1.4 00:26:19.127 NVMe Specification Version (Identify): 1.4 00:26:19.127 Maximum Queue Entries: 2048 00:26:19.127 Contiguous Queues Required: Yes 00:26:19.127 Arbitration Mechanisms Supported 00:26:19.127 Weighted Round Robin: Not Supported 00:26:19.127 Vendor Specific: Not Supported 00:26:19.127 Reset Timeout: 7500 ms 00:26:19.127 Doorbell Stride: 4 bytes 00:26:19.127 NVM Subsystem Reset: Not Supported 00:26:19.127 Command Sets Supported 00:26:19.127 NVM Command Set: Supported 00:26:19.127 Boot Partition: Not Supported 00:26:19.127 Memory Page Size Minimum: 4096 bytes 00:26:19.127 Memory Page Size Maximum: 65536 bytes 00:26:19.127 Persistent Memory Region: Not Supported 00:26:19.127 Optional Asynchronous Events Supported 00:26:19.127 Namespace Attribute Notices: Supported 00:26:19.128 Firmware Activation Notices: Not Supported 00:26:19.128 ANA Change Notices: Not Supported 00:26:19.128 PLE Aggregate Log Change Notices: Not Supported 00:26:19.128 LBA Status Info Alert Notices: Not Supported 00:26:19.128 EGE Aggregate Log Change Notices: Not Supported 00:26:19.128 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.128 Zone Descriptor Change Notices: Not Supported 00:26:19.128 Discovery Log Change Notices: Not Supported 00:26:19.128 Controller Attributes 00:26:19.128 128-bit Host Identifier: Not Supported 00:26:19.128 Non-Operational Permissive Mode: Not Supported 00:26:19.128 NVM Sets: Not Supported 00:26:19.128 Read Recovery Levels: Not Supported 00:26:19.128 Endurance Groups: Not Supported 00:26:19.128 Predictable Latency Mode: Not Supported 00:26:19.128 Traffic Based Keep ALive: Not Supported 00:26:19.128 Namespace Granularity: Not Supported 00:26:19.128 SQ Associations: Not Supported 00:26:19.128 UUID List: Not Supported 00:26:19.128 Multi-Domain Subsystem: Not Supported 00:26:19.128 Fixed Capacity Management: Not Supported 00:26:19.128 Variable Capacity Management: Not Supported 00:26:19.128 Delete Endurance Group: Not Supported 00:26:19.128 Delete NVM Set: Not Supported 00:26:19.128 Extended LBA Formats Supported: Supported 00:26:19.128 Flexible Data Placement Supported: Not Supported 00:26:19.128 00:26:19.128 Controller Memory Buffer Support 00:26:19.128 ================================ 00:26:19.128 Supported: No 00:26:19.128 00:26:19.128 Persistent Memory Region Support 00:26:19.128 ================================ 00:26:19.128 Supported: No 00:26:19.128 00:26:19.128 Admin Command Set Attributes 00:26:19.128 ============================ 00:26:19.128 Security Send/Receive: Not Supported 00:26:19.128 Format NVM: Supported 00:26:19.128 Firmware Activate/Download: Not Supported 00:26:19.128 Namespace Management: Supported 00:26:19.128 Device Self-Test: Not Supported 00:26:19.128 Directives: Supported 00:26:19.128 NVMe-MI: Not Supported 00:26:19.128 Virtualization Management: Not Supported 00:26:19.128 Doorbell Buffer Config: Supported 00:26:19.128 Get LBA Status Capability: Not Supported 00:26:19.128 Command & Feature Lockdown Capability: Not Supported 00:26:19.128 Abort Command Limit: 4 00:26:19.128 Async Event Request Limit: 4 00:26:19.128 Number of Firmware Slots: N/A 00:26:19.128 Firmware Slot 1 Read-Only: N/A 00:26:19.128 Firmware Activation Without Reset: N/A 
00:26:19.128 Multiple Update Detection Support: N/A 00:26:19.128 Firmware Update Granularity: No Information Provided 00:26:19.128 Per-Namespace SMART Log: Yes 00:26:19.128 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.128 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:26:19.128 Command Effects Log Page: Supported 00:26:19.128 Get Log Page Extended Data: Supported 00:26:19.128 Telemetry Log Pages: Not Supported 00:26:19.128 Persistent Event Log Pages: Not Supported 00:26:19.128 Supported Log Pages Log Page: May Support 00:26:19.128 Commands Supported & Effects Log Page: Not Supported 00:26:19.128 Feature Identifiers & Effects Log Page:May Support 00:26:19.128 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.128 Data Area 4 for Telemetry Log: Not Supported 00:26:19.128 Error Log Page Entries Supported: 1 00:26:19.128 Keep Alive: Not Supported 00:26:19.128 00:26:19.128 NVM Command Set Attributes 00:26:19.128 ========================== 00:26:19.128 Submission Queue Entry Size 00:26:19.128 Max: 64 00:26:19.128 Min: 64 00:26:19.128 Completion Queue Entry Size 00:26:19.128 Max: 16 00:26:19.128 Min: 16 00:26:19.128 Number of Namespaces: 256 00:26:19.128 Compare Command: Supported 00:26:19.128 Write Uncorrectable Command: Not Supported 00:26:19.128 Dataset Management Command: Supported 00:26:19.128 Write Zeroes Command: Supported 00:26:19.128 Set Features Save Field: Supported 00:26:19.128 Reservations: Not Supported 00:26:19.128 Timestamp: Supported 00:26:19.128 Copy: Supported 00:26:19.128 Volatile Write Cache: Present 00:26:19.128 Atomic Write Unit (Normal): 1 00:26:19.128 Atomic Write Unit (PFail): 1 00:26:19.128 Atomic Compare & Write Unit: 1 00:26:19.128 Fused Compare & Write: Not Supported 00:26:19.128 Scatter-Gather List 00:26:19.128 SGL Command Set: Supported 00:26:19.128 SGL Keyed: Not Supported 00:26:19.128 SGL Bit Bucket Descriptor: Not Supported 00:26:19.128 SGL Metadata Pointer: Not Supported 00:26:19.128 Oversized SGL: Not Supported 00:26:19.128 SGL Metadata Address: Not Supported 00:26:19.128 SGL Offset: Not Supported 00:26:19.128 Transport SGL Data Block: Not Supported 00:26:19.128 Replay Protected Memory Block: Not Supported 00:26:19.128 00:26:19.128 Firmware Slot Information 00:26:19.128 ========================= 00:26:19.128 Active slot: 1 00:26:19.128 Slot 1 Firmware Revision: 1.0 00:26:19.128 00:26:19.128 00:26:19.128 Commands Supported and Effects 00:26:19.128 ============================== 00:26:19.128 Admin Commands 00:26:19.128 -------------- 00:26:19.128 Delete I/O Submission Queue (00h): Supported 00:26:19.128 Create I/O Submission Queue (01h): Supported 00:26:19.128 Get Log Page (02h): Supported 00:26:19.128 Delete I/O Completion Queue (04h): Supported 00:26:19.128 Create I/O Completion Queue (05h): Supported 00:26:19.128 Identify (06h): Supported 00:26:19.128 Abort (08h): Supported 00:26:19.128 Set Features (09h): Supported 00:26:19.128 Get Features (0Ah): Supported 00:26:19.128 Asynchronous Event Request (0Ch): Supported 00:26:19.128 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:19.128 Directive Send (19h): Supported 00:26:19.128 Directive Receive (1Ah): Supported 00:26:19.128 Virtualization Management (1Ch): Supported 00:26:19.128 Doorbell Buffer Config (7Ch): Supported 00:26:19.128 Format NVM (80h): Supported LBA-Change 00:26:19.128 I/O Commands 00:26:19.128 ------------ 00:26:19.128 Flush (00h): Supported LBA-Change 00:26:19.128 Write (01h): Supported LBA-Change 00:26:19.128 Read (02h): Supported 00:26:19.128 Compare (05h): 
Supported 00:26:19.128 Write Zeroes (08h): Supported LBA-Change 00:26:19.128 Dataset Management (09h): Supported LBA-Change 00:26:19.128 Unknown (0Ch): Supported 00:26:19.128 Unknown (12h): Supported 00:26:19.128 Copy (19h): Supported LBA-Change 00:26:19.128 Unknown (1Dh): Supported LBA-Change 00:26:19.128 00:26:19.128 Error Log 00:26:19.128 ========= 00:26:19.128 00:26:19.128 Arbitration 00:26:19.128 =========== 00:26:19.128 Arbitration Burst: no limit 00:26:19.128 00:26:19.128 Power Management 00:26:19.128 ================ 00:26:19.128 Number of Power States: 1 00:26:19.128 Current Power State: Power State #0 00:26:19.128 Power State #0: 00:26:19.128 Max Power: 25.00 W 00:26:19.128 Non-Operational State: Operational 00:26:19.128 Entry Latency: 16 microseconds 00:26:19.128 Exit Latency: 4 microseconds 00:26:19.128 Relative Read Throughput: 0 00:26:19.128 Relative Read Latency: 0 00:26:19.128 Relative Write Throughput: 0 00:26:19.128 Relative Write Latency: 0 00:26:19.128 Idle Power: Not Reported 00:26:19.128 Active Power: Not Reported 00:26:19.128 Non-Operational Permissive Mode: Not Supported 00:26:19.128 00:26:19.128 Health Information 00:26:19.128 ================== 00:26:19.128 Critical Warnings: 00:26:19.128 Available Spare Space: OK 00:26:19.128 Temperature: OK 00:26:19.128 Device Reliability: OK 00:26:19.128 Read Only: No 00:26:19.128 Volatile Memory Backup: OK 00:26:19.128 Current Temperature: 323 Kelvin (50 Celsius) 00:26:19.128 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:19.128 Available Spare: 0% 00:26:19.128 Available Spare Threshold: 0% 00:26:19.128 Life Percentage Used: 0% 00:26:19.128 Data Units Read: 2046 00:26:19.128 Data Units Written: 1833 00:26:19.128 Host Read Commands: 99981 00:26:19.128 Host Write Commands: 98250 00:26:19.128 Controller Busy Time: 0 minutes 00:26:19.128 Power Cycles: 0 00:26:19.128 Power On Hours: 0 hours 00:26:19.128 Unsafe Shutdowns: 0 00:26:19.128 Unrecoverable Media Errors: 0 00:26:19.128 Lifetime Error Log Entries: 0 00:26:19.128 Warning Temperature Time: 0 minutes 00:26:19.128 Critical Temperature Time: 0 minutes 00:26:19.128 00:26:19.128 Number of Queues 00:26:19.128 ================ 00:26:19.128 Number of I/O Submission Queues: 64 00:26:19.128 Number of I/O Completion Queues: 64 00:26:19.128 00:26:19.128 ZNS Specific Controller Data 00:26:19.128 ============================ 00:26:19.128 Zone Append Size Limit: 0 00:26:19.128 00:26:19.128 00:26:19.128 Active Namespaces 00:26:19.128 ================= 00:26:19.128 Namespace ID:1 00:26:19.128 Error Recovery Timeout: Unlimited 00:26:19.128 Command Set Identifier: NVM (00h) 00:26:19.128 Deallocate: Supported 00:26:19.128 Deallocated/Unwritten Error: Supported 00:26:19.129 Deallocated Read Value: All 0x00 00:26:19.129 Deallocate in Write Zeroes: Not Supported 00:26:19.129 Deallocated Guard Field: 0xFFFF 00:26:19.129 Flush: Supported 00:26:19.129 Reservation: Not Supported 00:26:19.129 Namespace Sharing Capabilities: Private 00:26:19.129 Size (in LBAs): 1048576 (4GiB) 00:26:19.129 Capacity (in LBAs): 1048576 (4GiB) 00:26:19.129 Utilization (in LBAs): 1048576 (4GiB) 00:26:19.129 Thin Provisioning: Not Supported 00:26:19.129 Per-NS Atomic Units: No 00:26:19.129 Maximum Single Source Range Length: 128 00:26:19.129 Maximum Copy Length: 128 00:26:19.129 Maximum Source Range Count: 128 00:26:19.129 NGUID/EUI64 Never Reused: No 00:26:19.129 Namespace Write Protected: No 00:26:19.129 Number of LBA Formats: 8 00:26:19.129 Current LBA Format: LBA Format #04 00:26:19.129 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:26:19.129 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.129 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.129 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.129 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.129 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.129 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.129 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.129 00:26:19.129 NVM Specific Namespace Data 00:26:19.129 =========================== 00:26:19.129 Logical Block Storage Tag Mask: 0 00:26:19.129 Protection Information Capabilities: 00:26:19.129 16b Guard Protection Information Storage Tag Support: No 00:26:19.129 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.129 Storage Tag Check Read Support: No 00:26:19.129 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Namespace ID:2 00:26:19.129 Error Recovery Timeout: Unlimited 00:26:19.129 Command Set Identifier: NVM (00h) 00:26:19.129 Deallocate: Supported 00:26:19.129 Deallocated/Unwritten Error: Supported 00:26:19.129 Deallocated Read Value: All 0x00 00:26:19.129 Deallocate in Write Zeroes: Not Supported 00:26:19.129 Deallocated Guard Field: 0xFFFF 00:26:19.129 Flush: Supported 00:26:19.129 Reservation: Not Supported 00:26:19.129 Namespace Sharing Capabilities: Private 00:26:19.129 Size (in LBAs): 1048576 (4GiB) 00:26:19.129 Capacity (in LBAs): 1048576 (4GiB) 00:26:19.129 Utilization (in LBAs): 1048576 (4GiB) 00:26:19.129 Thin Provisioning: Not Supported 00:26:19.129 Per-NS Atomic Units: No 00:26:19.129 Maximum Single Source Range Length: 128 00:26:19.129 Maximum Copy Length: 128 00:26:19.129 Maximum Source Range Count: 128 00:26:19.129 NGUID/EUI64 Never Reused: No 00:26:19.129 Namespace Write Protected: No 00:26:19.129 Number of LBA Formats: 8 00:26:19.129 Current LBA Format: LBA Format #04 00:26:19.129 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.129 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.129 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.129 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.129 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.129 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.129 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.129 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.129 00:26:19.129 NVM Specific Namespace Data 00:26:19.129 =========================== 00:26:19.129 Logical Block Storage Tag Mask: 0 00:26:19.129 Protection Information Capabilities: 00:26:19.129 16b Guard Protection Information Storage Tag Support: No 00:26:19.129 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:26:19.129 Storage Tag Check Read Support: No 00:26:19.129 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Namespace ID:3 00:26:19.129 Error Recovery Timeout: Unlimited 00:26:19.129 Command Set Identifier: NVM (00h) 00:26:19.129 Deallocate: Supported 00:26:19.129 Deallocated/Unwritten Error: Supported 00:26:19.129 Deallocated Read Value: All 0x00 00:26:19.129 Deallocate in Write Zeroes: Not Supported 00:26:19.129 Deallocated Guard Field: 0xFFFF 00:26:19.129 Flush: Supported 00:26:19.129 Reservation: Not Supported 00:26:19.129 Namespace Sharing Capabilities: Private 00:26:19.129 Size (in LBAs): 1048576 (4GiB) 00:26:19.129 Capacity (in LBAs): 1048576 (4GiB) 00:26:19.129 Utilization (in LBAs): 1048576 (4GiB) 00:26:19.129 Thin Provisioning: Not Supported 00:26:19.129 Per-NS Atomic Units: No 00:26:19.129 Maximum Single Source Range Length: 128 00:26:19.129 Maximum Copy Length: 128 00:26:19.129 Maximum Source Range Count: 128 00:26:19.129 NGUID/EUI64 Never Reused: No 00:26:19.129 Namespace Write Protected: No 00:26:19.129 Number of LBA Formats: 8 00:26:19.129 Current LBA Format: LBA Format #04 00:26:19.129 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.129 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.129 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.129 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.129 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.129 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.129 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.129 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.129 00:26:19.129 NVM Specific Namespace Data 00:26:19.129 =========================== 00:26:19.129 Logical Block Storage Tag Mask: 0 00:26:19.129 Protection Information Capabilities: 00:26:19.129 16b Guard Protection Information Storage Tag Support: No 00:26:19.129 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.129 Storage Tag Check Read Support: No 00:26:19.129 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.129 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:26:19.129 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:26:19.389 ===================================================== 00:26:19.389 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:19.389 ===================================================== 00:26:19.389 Controller Capabilities/Features 00:26:19.389 ================================ 00:26:19.389 Vendor ID: 1b36 00:26:19.389 Subsystem Vendor ID: 1af4 00:26:19.389 Serial Number: 12340 00:26:19.389 Model Number: QEMU NVMe Ctrl 00:26:19.389 Firmware Version: 8.0.0 00:26:19.389 Recommended Arb Burst: 6 00:26:19.389 IEEE OUI Identifier: 00 54 52 00:26:19.389 Multi-path I/O 00:26:19.389 May have multiple subsystem ports: No 00:26:19.389 May have multiple controllers: No 00:26:19.389 Associated with SR-IOV VF: No 00:26:19.389 Max Data Transfer Size: 524288 00:26:19.389 Max Number of Namespaces: 256 00:26:19.389 Max Number of I/O Queues: 64 00:26:19.389 NVMe Specification Version (VS): 1.4 00:26:19.389 NVMe Specification Version (Identify): 1.4 00:26:19.389 Maximum Queue Entries: 2048 00:26:19.389 Contiguous Queues Required: Yes 00:26:19.389 Arbitration Mechanisms Supported 00:26:19.389 Weighted Round Robin: Not Supported 00:26:19.389 Vendor Specific: Not Supported 00:26:19.389 Reset Timeout: 7500 ms 00:26:19.389 Doorbell Stride: 4 bytes 00:26:19.389 NVM Subsystem Reset: Not Supported 00:26:19.389 Command Sets Supported 00:26:19.389 NVM Command Set: Supported 00:26:19.389 Boot Partition: Not Supported 00:26:19.389 Memory Page Size Minimum: 4096 bytes 00:26:19.389 Memory Page Size Maximum: 65536 bytes 00:26:19.389 Persistent Memory Region: Not Supported 00:26:19.389 Optional Asynchronous Events Supported 00:26:19.389 Namespace Attribute Notices: Supported 00:26:19.389 Firmware Activation Notices: Not Supported 00:26:19.389 ANA Change Notices: Not Supported 00:26:19.389 PLE Aggregate Log Change Notices: Not Supported 00:26:19.389 LBA Status Info Alert Notices: Not Supported 00:26:19.389 EGE Aggregate Log Change Notices: Not Supported 00:26:19.389 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.389 Zone Descriptor Change Notices: Not Supported 00:26:19.389 Discovery Log Change Notices: Not Supported 00:26:19.389 Controller Attributes 00:26:19.389 128-bit Host Identifier: Not Supported 00:26:19.389 Non-Operational Permissive Mode: Not Supported 00:26:19.390 NVM Sets: Not Supported 00:26:19.390 Read Recovery Levels: Not Supported 00:26:19.390 Endurance Groups: Not Supported 00:26:19.390 Predictable Latency Mode: Not Supported 00:26:19.390 Traffic Based Keep Alive: Not Supported 00:26:19.390 Namespace Granularity: Not Supported 00:26:19.390 SQ Associations: Not Supported 00:26:19.390 UUID List: Not Supported 00:26:19.390 Multi-Domain Subsystem: Not Supported 00:26:19.390 Fixed Capacity Management: Not Supported 00:26:19.390 Variable Capacity Management: Not Supported 00:26:19.390 Delete Endurance Group: Not Supported 00:26:19.390 Delete NVM Set: Not Supported 00:26:19.390 Extended LBA Formats Supported: Supported 00:26:19.390 Flexible Data Placement Supported: Not Supported 00:26:19.390 00:26:19.390 Controller Memory Buffer Support 00:26:19.390 ================================ 00:26:19.390 Supported: No 00:26:19.390 00:26:19.390 Persistent Memory Region Support 00:26:19.390 
================================ 00:26:19.390 Supported: No 00:26:19.390 00:26:19.390 Admin Command Set Attributes 00:26:19.390 ============================ 00:26:19.390 Security Send/Receive: Not Supported 00:26:19.390 Format NVM: Supported 00:26:19.390 Firmware Activate/Download: Not Supported 00:26:19.390 Namespace Management: Supported 00:26:19.390 Device Self-Test: Not Supported 00:26:19.390 Directives: Supported 00:26:19.390 NVMe-MI: Not Supported 00:26:19.390 Virtualization Management: Not Supported 00:26:19.390 Doorbell Buffer Config: Supported 00:26:19.390 Get LBA Status Capability: Not Supported 00:26:19.390 Command & Feature Lockdown Capability: Not Supported 00:26:19.390 Abort Command Limit: 4 00:26:19.390 Async Event Request Limit: 4 00:26:19.390 Number of Firmware Slots: N/A 00:26:19.390 Firmware Slot 1 Read-Only: N/A 00:26:19.390 Firmware Activation Without Reset: N/A 00:26:19.390 Multiple Update Detection Support: N/A 00:26:19.390 Firmware Update Granularity: No Information Provided 00:26:19.390 Per-Namespace SMART Log: Yes 00:26:19.390 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.390 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:26:19.390 Command Effects Log Page: Supported 00:26:19.390 Get Log Page Extended Data: Supported 00:26:19.390 Telemetry Log Pages: Not Supported 00:26:19.390 Persistent Event Log Pages: Not Supported 00:26:19.390 Supported Log Pages Log Page: May Support 00:26:19.390 Commands Supported & Effects Log Page: Not Supported 00:26:19.390 Feature Identifiers & Effects Log Page: May Support 00:26:19.390 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.390 Data Area 4 for Telemetry Log: Not Supported 00:26:19.390 Error Log Page Entries Supported: 1 00:26:19.390 Keep Alive: Not Supported 00:26:19.390 00:26:19.390 NVM Command Set Attributes 00:26:19.390 ========================== 00:26:19.390 Submission Queue Entry Size 00:26:19.390 Max: 64 00:26:19.390 Min: 64 00:26:19.390 Completion Queue Entry Size 00:26:19.390 Max: 16 00:26:19.390 Min: 16 00:26:19.390 Number of Namespaces: 256 00:26:19.390 Compare Command: Supported 00:26:19.390 Write Uncorrectable Command: Not Supported 00:26:19.390 Dataset Management Command: Supported 00:26:19.390 Write Zeroes Command: Supported 00:26:19.390 Set Features Save Field: Supported 00:26:19.390 Reservations: Not Supported 00:26:19.390 Timestamp: Supported 00:26:19.390 Copy: Supported 00:26:19.390 Volatile Write Cache: Present 00:26:19.390 Atomic Write Unit (Normal): 1 00:26:19.390 Atomic Write Unit (PFail): 1 00:26:19.390 Atomic Compare & Write Unit: 1 00:26:19.390 Fused Compare & Write: Not Supported 00:26:19.390 Scatter-Gather List 00:26:19.390 SGL Command Set: Supported 00:26:19.390 SGL Keyed: Not Supported 00:26:19.390 SGL Bit Bucket Descriptor: Not Supported 00:26:19.390 SGL Metadata Pointer: Not Supported 00:26:19.390 Oversized SGL: Not Supported 00:26:19.390 SGL Metadata Address: Not Supported 00:26:19.390 SGL Offset: Not Supported 00:26:19.390 Transport SGL Data Block: Not Supported 00:26:19.390 Replay Protected Memory Block: Not Supported 00:26:19.390 00:26:19.390 Firmware Slot Information 00:26:19.390 ========================= 00:26:19.390 Active slot: 1 00:26:19.390 Slot 1 Firmware Revision: 1.0 00:26:19.390 00:26:19.390 00:26:19.390 Commands Supported and Effects 00:26:19.390 ============================== 00:26:19.390 Admin Commands 00:26:19.390 -------------- 00:26:19.390 Delete I/O Submission Queue (00h): Supported 00:26:19.390 Create I/O Submission Queue (01h): Supported 00:26:19.390 
Get Log Page (02h): Supported 00:26:19.390 Delete I/O Completion Queue (04h): Supported 00:26:19.390 Create I/O Completion Queue (05h): Supported 00:26:19.390 Identify (06h): Supported 00:26:19.390 Abort (08h): Supported 00:26:19.390 Set Features (09h): Supported 00:26:19.390 Get Features (0Ah): Supported 00:26:19.390 Asynchronous Event Request (0Ch): Supported 00:26:19.390 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:19.390 Directive Send (19h): Supported 00:26:19.390 Directive Receive (1Ah): Supported 00:26:19.390 Virtualization Management (1Ch): Supported 00:26:19.390 Doorbell Buffer Config (7Ch): Supported 00:26:19.390 Format NVM (80h): Supported LBA-Change 00:26:19.390 I/O Commands 00:26:19.390 ------------ 00:26:19.390 Flush (00h): Supported LBA-Change 00:26:19.390 Write (01h): Supported LBA-Change 00:26:19.390 Read (02h): Supported 00:26:19.390 Compare (05h): Supported 00:26:19.390 Write Zeroes (08h): Supported LBA-Change 00:26:19.390 Dataset Management (09h): Supported LBA-Change 00:26:19.390 Unknown (0Ch): Supported 00:26:19.390 Unknown (12h): Supported 00:26:19.390 Copy (19h): Supported LBA-Change 00:26:19.390 Unknown (1Dh): Supported LBA-Change 00:26:19.390 00:26:19.390 Error Log 00:26:19.390 ========= 00:26:19.390 00:26:19.390 Arbitration 00:26:19.390 =========== 00:26:19.390 Arbitration Burst: no limit 00:26:19.390 00:26:19.390 Power Management 00:26:19.390 ================ 00:26:19.390 Number of Power States: 1 00:26:19.390 Current Power State: Power State #0 00:26:19.390 Power State #0: 00:26:19.390 Max Power: 25.00 W 00:26:19.390 Non-Operational State: Operational 00:26:19.390 Entry Latency: 16 microseconds 00:26:19.390 Exit Latency: 4 microseconds 00:26:19.390 Relative Read Throughput: 0 00:26:19.390 Relative Read Latency: 0 00:26:19.390 Relative Write Throughput: 0 00:26:19.390 Relative Write Latency: 0 00:26:19.390 Idle Power: Not Reported 00:26:19.390 Active Power: Not Reported 00:26:19.390 Non-Operational Permissive Mode: Not Supported 00:26:19.390 00:26:19.390 Health Information 00:26:19.390 ================== 00:26:19.390 Critical Warnings: 00:26:19.390 Available Spare Space: OK 00:26:19.390 Temperature: OK 00:26:19.390 Device Reliability: OK 00:26:19.390 Read Only: No 00:26:19.390 Volatile Memory Backup: OK 00:26:19.390 Current Temperature: 323 Kelvin (50 Celsius) 00:26:19.390 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:19.390 Available Spare: 0% 00:26:19.390 Available Spare Threshold: 0% 00:26:19.390 Life Percentage Used: 0% 00:26:19.390 Data Units Read: 635 00:26:19.390 Data Units Written: 563 00:26:19.390 Host Read Commands: 32701 00:26:19.390 Host Write Commands: 32487 00:26:19.390 Controller Busy Time: 0 minutes 00:26:19.390 Power Cycles: 0 00:26:19.390 Power On Hours: 0 hours 00:26:19.390 Unsafe Shutdowns: 0 00:26:19.390 Unrecoverable Media Errors: 0 00:26:19.390 Lifetime Error Log Entries: 0 00:26:19.390 Warning Temperature Time: 0 minutes 00:26:19.390 Critical Temperature Time: 0 minutes 00:26:19.390 00:26:19.390 Number of Queues 00:26:19.390 ================ 00:26:19.390 Number of I/O Submission Queues: 64 00:26:19.390 Number of I/O Completion Queues: 64 00:26:19.390 00:26:19.390 ZNS Specific Controller Data 00:26:19.390 ============================ 00:26:19.390 Zone Append Size Limit: 0 00:26:19.390 00:26:19.390 00:26:19.390 Active Namespaces 00:26:19.390 ================= 00:26:19.390 Namespace ID:1 00:26:19.390 Error Recovery Timeout: Unlimited 00:26:19.390 Command Set Identifier: NVM (00h) 00:26:19.390 Deallocate: Supported 
00:26:19.390 Deallocated/Unwritten Error: Supported 00:26:19.390 Deallocated Read Value: All 0x00 00:26:19.390 Deallocate in Write Zeroes: Not Supported 00:26:19.390 Deallocated Guard Field: 0xFFFF 00:26:19.390 Flush: Supported 00:26:19.390 Reservation: Not Supported 00:26:19.390 Metadata Transferred as: Separate Metadata Buffer 00:26:19.390 Namespace Sharing Capabilities: Private 00:26:19.390 Size (in LBAs): 1548666 (5GiB) 00:26:19.390 Capacity (in LBAs): 1548666 (5GiB) 00:26:19.390 Utilization (in LBAs): 1548666 (5GiB) 00:26:19.391 Thin Provisioning: Not Supported 00:26:19.391 Per-NS Atomic Units: No 00:26:19.391 Maximum Single Source Range Length: 128 00:26:19.391 Maximum Copy Length: 128 00:26:19.391 Maximum Source Range Count: 128 00:26:19.391 NGUID/EUI64 Never Reused: No 00:26:19.391 Namespace Write Protected: No 00:26:19.391 Number of LBA Formats: 8 00:26:19.391 Current LBA Format: LBA Format #07 00:26:19.391 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.391 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.391 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.391 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.391 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.391 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.391 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.391 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.391 00:26:19.391 NVM Specific Namespace Data 00:26:19.391 =========================== 00:26:19.391 Logical Block Storage Tag Mask: 0 00:26:19.391 Protection Information Capabilities: 00:26:19.391 16b Guard Protection Information Storage Tag Support: No 00:26:19.391 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.391 Storage Tag Check Read Support: No 00:26:19.391 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.391 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.391 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.391 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.391 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.391 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.391 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.391 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.391 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:26:19.391 06:52:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:26:19.650 ===================================================== 00:26:19.650 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:26:19.650 ===================================================== 00:26:19.650 Controller Capabilities/Features 00:26:19.650 ================================ 00:26:19.650 Vendor ID: 1b36 00:26:19.650 Subsystem Vendor ID: 1af4 00:26:19.650 Serial Number: 12341 00:26:19.650 Model Number: QEMU NVMe Ctrl 00:26:19.650 Firmware Version: 8.0.0 00:26:19.650 Recommended Arb Burst: 6 00:26:19.650 IEEE OUI Identifier: 00 54 52 00:26:19.650 Multi-path I/O 00:26:19.650 May have multiple subsystem ports: No 00:26:19.650 May have multiple 
controllers: No 00:26:19.650 Associated with SR-IOV VF: No 00:26:19.650 Max Data Transfer Size: 524288 00:26:19.650 Max Number of Namespaces: 256 00:26:19.650 Max Number of I/O Queues: 64 00:26:19.650 NVMe Specification Version (VS): 1.4 00:26:19.650 NVMe Specification Version (Identify): 1.4 00:26:19.650 Maximum Queue Entries: 2048 00:26:19.650 Contiguous Queues Required: Yes 00:26:19.650 Arbitration Mechanisms Supported 00:26:19.650 Weighted Round Robin: Not Supported 00:26:19.650 Vendor Specific: Not Supported 00:26:19.650 Reset Timeout: 7500 ms 00:26:19.650 Doorbell Stride: 4 bytes 00:26:19.650 NVM Subsystem Reset: Not Supported 00:26:19.650 Command Sets Supported 00:26:19.651 NVM Command Set: Supported 00:26:19.651 Boot Partition: Not Supported 00:26:19.651 Memory Page Size Minimum: 4096 bytes 00:26:19.651 Memory Page Size Maximum: 65536 bytes 00:26:19.651 Persistent Memory Region: Not Supported 00:26:19.651 Optional Asynchronous Events Supported 00:26:19.651 Namespace Attribute Notices: Supported 00:26:19.651 Firmware Activation Notices: Not Supported 00:26:19.651 ANA Change Notices: Not Supported 00:26:19.651 PLE Aggregate Log Change Notices: Not Supported 00:26:19.651 LBA Status Info Alert Notices: Not Supported 00:26:19.651 EGE Aggregate Log Change Notices: Not Supported 00:26:19.651 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.651 Zone Descriptor Change Notices: Not Supported 00:26:19.651 Discovery Log Change Notices: Not Supported 00:26:19.651 Controller Attributes 00:26:19.651 128-bit Host Identifier: Not Supported 00:26:19.651 Non-Operational Permissive Mode: Not Supported 00:26:19.651 NVM Sets: Not Supported 00:26:19.651 Read Recovery Levels: Not Supported 00:26:19.651 Endurance Groups: Not Supported 00:26:19.651 Predictable Latency Mode: Not Supported 00:26:19.651 Traffic Based Keep Alive: Not Supported 00:26:19.651 Namespace Granularity: Not Supported 00:26:19.651 SQ Associations: Not Supported 00:26:19.651 UUID List: Not Supported 00:26:19.651 Multi-Domain Subsystem: Not Supported 00:26:19.651 Fixed Capacity Management: Not Supported 00:26:19.651 Variable Capacity Management: Not Supported 00:26:19.651 Delete Endurance Group: Not Supported 00:26:19.651 Delete NVM Set: Not Supported 00:26:19.651 Extended LBA Formats Supported: Supported 00:26:19.651 Flexible Data Placement Supported: Not Supported 00:26:19.651 00:26:19.651 Controller Memory Buffer Support 00:26:19.651 ================================ 00:26:19.651 Supported: No 00:26:19.651 00:26:19.651 Persistent Memory Region Support 00:26:19.651 ================================ 00:26:19.651 Supported: No 00:26:19.651 00:26:19.651 Admin Command Set Attributes 00:26:19.651 ============================ 00:26:19.651 Security Send/Receive: Not Supported 00:26:19.651 Format NVM: Supported 00:26:19.651 Firmware Activate/Download: Not Supported 00:26:19.651 Namespace Management: Supported 00:26:19.651 Device Self-Test: Not Supported 00:26:19.651 Directives: Supported 00:26:19.651 NVMe-MI: Not Supported 00:26:19.651 Virtualization Management: Not Supported 00:26:19.651 Doorbell Buffer Config: Supported 00:26:19.651 Get LBA Status Capability: Not Supported 00:26:19.651 Command & Feature Lockdown Capability: Not Supported 00:26:19.651 Abort Command Limit: 4 00:26:19.651 Async Event Request Limit: 4 00:26:19.651 Number of Firmware Slots: N/A 00:26:19.651 Firmware Slot 1 Read-Only: N/A 00:26:19.651 Firmware Activation Without Reset: N/A 00:26:19.651 Multiple Update Detection Support: N/A 00:26:19.651 Firmware Update 
Granularity: No Information Provided 00:26:19.651 Per-Namespace SMART Log: Yes 00:26:19.651 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.651 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:26:19.651 Command Effects Log Page: Supported 00:26:19.651 Get Log Page Extended Data: Supported 00:26:19.651 Telemetry Log Pages: Not Supported 00:26:19.651 Persistent Event Log Pages: Not Supported 00:26:19.651 Supported Log Pages Log Page: May Support 00:26:19.651 Commands Supported & Effects Log Page: Not Supported 00:26:19.651 Feature Identifiers & Effects Log Page: May Support 00:26:19.651 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.651 Data Area 4 for Telemetry Log: Not Supported 00:26:19.651 Error Log Page Entries Supported: 1 00:26:19.651 Keep Alive: Not Supported 00:26:19.651 00:26:19.651 NVM Command Set Attributes 00:26:19.651 ========================== 00:26:19.651 Submission Queue Entry Size 00:26:19.651 Max: 64 00:26:19.651 Min: 64 00:26:19.651 Completion Queue Entry Size 00:26:19.651 Max: 16 00:26:19.651 Min: 16 00:26:19.651 Number of Namespaces: 256 00:26:19.651 Compare Command: Supported 00:26:19.651 Write Uncorrectable Command: Not Supported 00:26:19.651 Dataset Management Command: Supported 00:26:19.651 Write Zeroes Command: Supported 00:26:19.651 Set Features Save Field: Supported 00:26:19.651 Reservations: Not Supported 00:26:19.651 Timestamp: Supported 00:26:19.651 Copy: Supported 00:26:19.651 Volatile Write Cache: Present 00:26:19.651 Atomic Write Unit (Normal): 1 00:26:19.651 Atomic Write Unit (PFail): 1 00:26:19.651 Atomic Compare & Write Unit: 1 00:26:19.651 Fused Compare & Write: Not Supported 00:26:19.651 Scatter-Gather List 00:26:19.651 SGL Command Set: Supported 00:26:19.651 SGL Keyed: Not Supported 00:26:19.651 SGL Bit Bucket Descriptor: Not Supported 00:26:19.651 SGL Metadata Pointer: Not Supported 00:26:19.651 Oversized SGL: Not Supported 00:26:19.651 SGL Metadata Address: Not Supported 00:26:19.651 SGL Offset: Not Supported 00:26:19.651 Transport SGL Data Block: Not Supported 00:26:19.651 Replay Protected Memory Block: Not Supported 00:26:19.651 00:26:19.651 Firmware Slot Information 00:26:19.651 ========================= 00:26:19.651 Active slot: 1 00:26:19.651 Slot 1 Firmware Revision: 1.0 00:26:19.651 00:26:19.651 00:26:19.651 Commands Supported and Effects 00:26:19.651 ============================== 00:26:19.651 Admin Commands 00:26:19.651 -------------- 00:26:19.651 Delete I/O Submission Queue (00h): Supported 00:26:19.651 Create I/O Submission Queue (01h): Supported 00:26:19.651 Get Log Page (02h): Supported 00:26:19.651 Delete I/O Completion Queue (04h): Supported 00:26:19.651 Create I/O Completion Queue (05h): Supported 00:26:19.651 Identify (06h): Supported 00:26:19.651 Abort (08h): Supported 00:26:19.651 Set Features (09h): Supported 00:26:19.651 Get Features (0Ah): Supported 00:26:19.651 Asynchronous Event Request (0Ch): Supported 00:26:19.651 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:19.651 Directive Send (19h): Supported 00:26:19.651 Directive Receive (1Ah): Supported 00:26:19.651 Virtualization Management (1Ch): Supported 00:26:19.651 Doorbell Buffer Config (7Ch): Supported 00:26:19.651 Format NVM (80h): Supported LBA-Change 00:26:19.651 I/O Commands 00:26:19.651 ------------ 00:26:19.651 Flush (00h): Supported LBA-Change 00:26:19.651 Write (01h): Supported LBA-Change 00:26:19.651 Read (02h): Supported 00:26:19.651 Compare (05h): Supported 00:26:19.651 Write Zeroes (08h): Supported LBA-Change 00:26:19.651 
Dataset Management (09h): Supported LBA-Change 00:26:19.651 Unknown (0Ch): Supported 00:26:19.651 Unknown (12h): Supported 00:26:19.651 Copy (19h): Supported LBA-Change 00:26:19.651 Unknown (1Dh): Supported LBA-Change 00:26:19.651 00:26:19.651 Error Log 00:26:19.651 ========= 00:26:19.651 00:26:19.651 Arbitration 00:26:19.651 =========== 00:26:19.651 Arbitration Burst: no limit 00:26:19.651 00:26:19.651 Power Management 00:26:19.651 ================ 00:26:19.651 Number of Power States: 1 00:26:19.651 Current Power State: Power State #0 00:26:19.651 Power State #0: 00:26:19.651 Max Power: 25.00 W 00:26:19.651 Non-Operational State: Operational 00:26:19.651 Entry Latency: 16 microseconds 00:26:19.651 Exit Latency: 4 microseconds 00:26:19.651 Relative Read Throughput: 0 00:26:19.651 Relative Read Latency: 0 00:26:19.651 Relative Write Throughput: 0 00:26:19.651 Relative Write Latency: 0 00:26:19.651 Idle Power: Not Reported 00:26:19.651 Active Power: Not Reported 00:26:19.651 Non-Operational Permissive Mode: Not Supported 00:26:19.651 00:26:19.651 Health Information 00:26:19.651 ================== 00:26:19.651 Critical Warnings: 00:26:19.651 Available Spare Space: OK 00:26:19.651 Temperature: OK 00:26:19.651 Device Reliability: OK 00:26:19.651 Read Only: No 00:26:19.651 Volatile Memory Backup: OK 00:26:19.651 Current Temperature: 323 Kelvin (50 Celsius) 00:26:19.651 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:19.651 Available Spare: 0% 00:26:19.651 Available Spare Threshold: 0% 00:26:19.651 Life Percentage Used: 0% 00:26:19.651 Data Units Read: 943 00:26:19.651 Data Units Written: 810 00:26:19.651 Host Read Commands: 48497 00:26:19.651 Host Write Commands: 47258 00:26:19.651 Controller Busy Time: 0 minutes 00:26:19.651 Power Cycles: 0 00:26:19.651 Power On Hours: 0 hours 00:26:19.651 Unsafe Shutdowns: 0 00:26:19.651 Unrecoverable Media Errors: 0 00:26:19.651 Lifetime Error Log Entries: 0 00:26:19.651 Warning Temperature Time: 0 minutes 00:26:19.651 Critical Temperature Time: 0 minutes 00:26:19.651 00:26:19.651 Number of Queues 00:26:19.651 ================ 00:26:19.651 Number of I/O Submission Queues: 64 00:26:19.651 Number of I/O Completion Queues: 64 00:26:19.651 00:26:19.651 ZNS Specific Controller Data 00:26:19.652 ============================ 00:26:19.652 Zone Append Size Limit: 0 00:26:19.652 00:26:19.652 00:26:19.652 Active Namespaces 00:26:19.652 ================= 00:26:19.652 Namespace ID:1 00:26:19.652 Error Recovery Timeout: Unlimited 00:26:19.652 Command Set Identifier: NVM (00h) 00:26:19.652 Deallocate: Supported 00:26:19.652 Deallocated/Unwritten Error: Supported 00:26:19.652 Deallocated Read Value: All 0x00 00:26:19.652 Deallocate in Write Zeroes: Not Supported 00:26:19.652 Deallocated Guard Field: 0xFFFF 00:26:19.652 Flush: Supported 00:26:19.652 Reservation: Not Supported 00:26:19.652 Namespace Sharing Capabilities: Private 00:26:19.652 Size (in LBAs): 1310720 (5GiB) 00:26:19.652 Capacity (in LBAs): 1310720 (5GiB) 00:26:19.652 Utilization (in LBAs): 1310720 (5GiB) 00:26:19.652 Thin Provisioning: Not Supported 00:26:19.652 Per-NS Atomic Units: No 00:26:19.652 Maximum Single Source Range Length: 128 00:26:19.652 Maximum Copy Length: 128 00:26:19.652 Maximum Source Range Count: 128 00:26:19.652 NGUID/EUI64 Never Reused: No 00:26:19.652 Namespace Write Protected: No 00:26:19.652 Number of LBA Formats: 8 00:26:19.652 Current LBA Format: LBA Format #04 00:26:19.652 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.652 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:26:19.652 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.652 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.652 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.652 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.652 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.652 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.652 00:26:19.652 NVM Specific Namespace Data 00:26:19.652 =========================== 00:26:19.652 Logical Block Storage Tag Mask: 0 00:26:19.652 Protection Information Capabilities: 00:26:19.652 16b Guard Protection Information Storage Tag Support: No 00:26:19.652 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.652 Storage Tag Check Read Support: No 00:26:19.652 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.652 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.652 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.652 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.652 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.652 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.652 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.652 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.652 06:52:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:26:19.652 06:52:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:26:19.911 ===================================================== 00:26:19.911 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:26:19.911 ===================================================== 00:26:19.911 Controller Capabilities/Features 00:26:19.911 ================================ 00:26:19.911 Vendor ID: 1b36 00:26:19.911 Subsystem Vendor ID: 1af4 00:26:19.911 Serial Number: 12342 00:26:19.911 Model Number: QEMU NVMe Ctrl 00:26:19.911 Firmware Version: 8.0.0 00:26:19.911 Recommended Arb Burst: 6 00:26:19.911 IEEE OUI Identifier: 00 54 52 00:26:19.911 Multi-path I/O 00:26:19.911 May have multiple subsystem ports: No 00:26:19.911 May have multiple controllers: No 00:26:19.911 Associated with SR-IOV VF: No 00:26:19.911 Max Data Transfer Size: 524288 00:26:19.911 Max Number of Namespaces: 256 00:26:19.911 Max Number of I/O Queues: 64 00:26:19.911 NVMe Specification Version (VS): 1.4 00:26:19.911 NVMe Specification Version (Identify): 1.4 00:26:19.911 Maximum Queue Entries: 2048 00:26:19.911 Contiguous Queues Required: Yes 00:26:19.911 Arbitration Mechanisms Supported 00:26:19.911 Weighted Round Robin: Not Supported 00:26:19.911 Vendor Specific: Not Supported 00:26:19.911 Reset Timeout: 7500 ms 00:26:19.911 Doorbell Stride: 4 bytes 00:26:19.911 NVM Subsystem Reset: Not Supported 00:26:19.911 Command Sets Supported 00:26:19.911 NVM Command Set: Supported 00:26:19.911 Boot Partition: Not Supported 00:26:19.911 Memory Page Size Minimum: 4096 bytes 00:26:19.911 Memory Page Size Maximum: 65536 bytes 00:26:19.911 Persistent Memory Region: Not Supported 00:26:19.911 Optional Asynchronous Events Supported 00:26:19.911 Namespace Attribute Notices: Supported 00:26:19.911 Firmware 
Activation Notices: Not Supported 00:26:19.911 ANA Change Notices: Not Supported 00:26:19.911 PLE Aggregate Log Change Notices: Not Supported 00:26:19.911 LBA Status Info Alert Notices: Not Supported 00:26:19.911 EGE Aggregate Log Change Notices: Not Supported 00:26:19.911 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.911 Zone Descriptor Change Notices: Not Supported 00:26:19.911 Discovery Log Change Notices: Not Supported 00:26:19.911 Controller Attributes 00:26:19.911 128-bit Host Identifier: Not Supported 00:26:19.911 Non-Operational Permissive Mode: Not Supported 00:26:19.911 NVM Sets: Not Supported 00:26:19.911 Read Recovery Levels: Not Supported 00:26:19.911 Endurance Groups: Not Supported 00:26:19.911 Predictable Latency Mode: Not Supported 00:26:19.911 Traffic Based Keep Alive: Not Supported 00:26:19.911 Namespace Granularity: Not Supported 00:26:19.911 SQ Associations: Not Supported 00:26:19.911 UUID List: Not Supported 00:26:19.911 Multi-Domain Subsystem: Not Supported 00:26:19.911 Fixed Capacity Management: Not Supported 00:26:19.911 Variable Capacity Management: Not Supported 00:26:19.911 Delete Endurance Group: Not Supported 00:26:19.911 Delete NVM Set: Not Supported 00:26:19.911 Extended LBA Formats Supported: Supported 00:26:19.911 Flexible Data Placement Supported: Not Supported 00:26:19.911 00:26:19.911 Controller Memory Buffer Support 00:26:19.911 ================================ 00:26:19.911 Supported: No 00:26:19.911 00:26:19.911 Persistent Memory Region Support 00:26:19.911 ================================ 00:26:19.911 Supported: No 00:26:19.911 00:26:19.911 Admin Command Set Attributes 00:26:19.911 ============================ 00:26:19.911 Security Send/Receive: Not Supported 00:26:19.911 Format NVM: Supported 00:26:19.911 Firmware Activate/Download: Not Supported 00:26:19.911 Namespace Management: Supported 00:26:19.911 Device Self-Test: Not Supported 00:26:19.911 Directives: Supported 00:26:19.912 NVMe-MI: Not Supported 00:26:19.912 Virtualization Management: Not Supported 00:26:19.912 Doorbell Buffer Config: Supported 00:26:19.912 Get LBA Status Capability: Not Supported 00:26:19.912 Command & Feature Lockdown Capability: Not Supported 00:26:19.912 Abort Command Limit: 4 00:26:19.912 Async Event Request Limit: 4 00:26:19.912 Number of Firmware Slots: N/A 00:26:19.912 Firmware Slot 1 Read-Only: N/A 00:26:19.912 Firmware Activation Without Reset: N/A 00:26:19.912 Multiple Update Detection Support: N/A 00:26:19.912 Firmware Update Granularity: No Information Provided 00:26:19.912 Per-Namespace SMART Log: Yes 00:26:19.912 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.912 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:26:19.912 Command Effects Log Page: Supported 00:26:19.912 Get Log Page Extended Data: Supported 00:26:19.912 Telemetry Log Pages: Not Supported 00:26:19.912 Persistent Event Log Pages: Not Supported 00:26:19.912 Supported Log Pages Log Page: May Support 00:26:19.912 Commands Supported & Effects Log Page: Not Supported 00:26:19.912 Feature Identifiers & Effects Log Page: May Support 00:26:19.912 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.912 Data Area 4 for Telemetry Log: Not Supported 00:26:19.912 Error Log Page Entries Supported: 1 00:26:19.912 Keep Alive: Not Supported 00:26:19.912 00:26:19.912 NVM Command Set Attributes 00:26:19.912 ========================== 00:26:19.912 Submission Queue Entry Size 00:26:19.912 Max: 64 00:26:19.912 Min: 64 00:26:19.912 Completion Queue Entry Size 00:26:19.912 Max: 16 
00:26:19.912 Min: 16 00:26:19.912 Number of Namespaces: 256 00:26:19.912 Compare Command: Supported 00:26:19.912 Write Uncorrectable Command: Not Supported 00:26:19.912 Dataset Management Command: Supported 00:26:19.912 Write Zeroes Command: Supported 00:26:19.912 Set Features Save Field: Supported 00:26:19.912 Reservations: Not Supported 00:26:19.912 Timestamp: Supported 00:26:19.912 Copy: Supported 00:26:19.912 Volatile Write Cache: Present 00:26:19.912 Atomic Write Unit (Normal): 1 00:26:19.912 Atomic Write Unit (PFail): 1 00:26:19.912 Atomic Compare & Write Unit: 1 00:26:19.912 Fused Compare & Write: Not Supported 00:26:19.912 Scatter-Gather List 00:26:19.912 SGL Command Set: Supported 00:26:19.912 SGL Keyed: Not Supported 00:26:19.912 SGL Bit Bucket Descriptor: Not Supported 00:26:19.912 SGL Metadata Pointer: Not Supported 00:26:19.912 Oversized SGL: Not Supported 00:26:19.912 SGL Metadata Address: Not Supported 00:26:19.912 SGL Offset: Not Supported 00:26:19.912 Transport SGL Data Block: Not Supported 00:26:19.912 Replay Protected Memory Block: Not Supported 00:26:19.912 00:26:19.912 Firmware Slot Information 00:26:19.912 ========================= 00:26:19.912 Active slot: 1 00:26:19.912 Slot 1 Firmware Revision: 1.0 00:26:19.912 00:26:19.912 00:26:19.912 Commands Supported and Effects 00:26:19.912 ============================== 00:26:19.912 Admin Commands 00:26:19.912 -------------- 00:26:19.912 Delete I/O Submission Queue (00h): Supported 00:26:19.912 Create I/O Submission Queue (01h): Supported 00:26:19.912 Get Log Page (02h): Supported 00:26:19.912 Delete I/O Completion Queue (04h): Supported 00:26:19.912 Create I/O Completion Queue (05h): Supported 00:26:19.912 Identify (06h): Supported 00:26:19.912 Abort (08h): Supported 00:26:19.912 Set Features (09h): Supported 00:26:19.912 Get Features (0Ah): Supported 00:26:19.912 Asynchronous Event Request (0Ch): Supported 00:26:19.912 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:19.912 Directive Send (19h): Supported 00:26:19.912 Directive Receive (1Ah): Supported 00:26:19.912 Virtualization Management (1Ch): Supported 00:26:19.912 Doorbell Buffer Config (7Ch): Supported 00:26:19.912 Format NVM (80h): Supported LBA-Change 00:26:19.912 I/O Commands 00:26:19.912 ------------ 00:26:19.912 Flush (00h): Supported LBA-Change 00:26:19.912 Write (01h): Supported LBA-Change 00:26:19.912 Read (02h): Supported 00:26:19.912 Compare (05h): Supported 00:26:19.912 Write Zeroes (08h): Supported LBA-Change 00:26:19.912 Dataset Management (09h): Supported LBA-Change 00:26:19.912 Unknown (0Ch): Supported 00:26:19.912 Unknown (12h): Supported 00:26:19.912 Copy (19h): Supported LBA-Change 00:26:19.912 Unknown (1Dh): Supported LBA-Change 00:26:19.912 00:26:19.912 Error Log 00:26:19.912 ========= 00:26:19.912 00:26:19.912 Arbitration 00:26:19.912 =========== 00:26:19.912 Arbitration Burst: no limit 00:26:19.912 00:26:19.912 Power Management 00:26:19.912 ================ 00:26:19.912 Number of Power States: 1 00:26:19.912 Current Power State: Power State #0 00:26:19.912 Power State #0: 00:26:19.912 Max Power: 25.00 W 00:26:19.912 Non-Operational State: Operational 00:26:19.912 Entry Latency: 16 microseconds 00:26:19.912 Exit Latency: 4 microseconds 00:26:19.912 Relative Read Throughput: 0 00:26:19.912 Relative Read Latency: 0 00:26:19.912 Relative Write Throughput: 0 00:26:19.912 Relative Write Latency: 0 00:26:19.912 Idle Power: Not Reported 00:26:19.912 Active Power: Not Reported 00:26:19.912 Non-Operational Permissive Mode: Not Supported 
00:26:19.912 00:26:19.912 Health Information 00:26:19.912 ================== 00:26:19.912 Critical Warnings: 00:26:19.912 Available Spare Space: OK 00:26:19.912 Temperature: OK 00:26:19.912 Device Reliability: OK 00:26:19.912 Read Only: No 00:26:19.912 Volatile Memory Backup: OK 00:26:19.912 Current Temperature: 323 Kelvin (50 Celsius) 00:26:19.912 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:19.912 Available Spare: 0% 00:26:19.912 Available Spare Threshold: 0% 00:26:19.912 Life Percentage Used: 0% 00:26:19.912 Data Units Read: 2046 00:26:19.912 Data Units Written: 1833 00:26:19.912 Host Read Commands: 99981 00:26:19.912 Host Write Commands: 98250 00:26:19.912 Controller Busy Time: 0 minutes 00:26:19.912 Power Cycles: 0 00:26:19.912 Power On Hours: 0 hours 00:26:19.912 Unsafe Shutdowns: 0 00:26:19.912 Unrecoverable Media Errors: 0 00:26:19.912 Lifetime Error Log Entries: 0 00:26:19.912 Warning Temperature Time: 0 minutes 00:26:19.912 Critical Temperature Time: 0 minutes 00:26:19.912 00:26:19.912 Number of Queues 00:26:19.912 ================ 00:26:19.912 Number of I/O Submission Queues: 64 00:26:19.912 Number of I/O Completion Queues: 64 00:26:19.912 00:26:19.912 ZNS Specific Controller Data 00:26:19.912 ============================ 00:26:19.912 Zone Append Size Limit: 0 00:26:19.912 00:26:19.912 00:26:19.912 Active Namespaces 00:26:19.912 ================= 00:26:19.912 Namespace ID:1 00:26:19.912 Error Recovery Timeout: Unlimited 00:26:19.912 Command Set Identifier: NVM (00h) 00:26:19.912 Deallocate: Supported 00:26:19.913 Deallocated/Unwritten Error: Supported 00:26:19.913 Deallocated Read Value: All 0x00 00:26:19.913 Deallocate in Write Zeroes: Not Supported 00:26:19.913 Deallocated Guard Field: 0xFFFF 00:26:19.913 Flush: Supported 00:26:19.913 Reservation: Not Supported 00:26:19.913 Namespace Sharing Capabilities: Private 00:26:19.913 Size (in LBAs): 1048576 (4GiB) 00:26:19.913 Capacity (in LBAs): 1048576 (4GiB) 00:26:19.913 Utilization (in LBAs): 1048576 (4GiB) 00:26:19.913 Thin Provisioning: Not Supported 00:26:19.913 Per-NS Atomic Units: No 00:26:19.913 Maximum Single Source Range Length: 128 00:26:19.913 Maximum Copy Length: 128 00:26:19.913 Maximum Source Range Count: 128 00:26:19.913 NGUID/EUI64 Never Reused: No 00:26:19.913 Namespace Write Protected: No 00:26:19.913 Number of LBA Formats: 8 00:26:19.913 Current LBA Format: LBA Format #04 00:26:19.913 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.913 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.913 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.913 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.913 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.913 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.913 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.913 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.913 00:26:19.913 NVM Specific Namespace Data 00:26:19.913 =========================== 00:26:19.913 Logical Block Storage Tag Mask: 0 00:26:19.913 Protection Information Capabilities: 00:26:19.913 16b Guard Protection Information Storage Tag Support: No 00:26:19.913 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.913 Storage Tag Check Read Support: No 00:26:19.913 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Namespace ID:2 00:26:19.913 Error Recovery Timeout: Unlimited 00:26:19.913 Command Set Identifier: NVM (00h) 00:26:19.913 Deallocate: Supported 00:26:19.913 Deallocated/Unwritten Error: Supported 00:26:19.913 Deallocated Read Value: All 0x00 00:26:19.913 Deallocate in Write Zeroes: Not Supported 00:26:19.913 Deallocated Guard Field: 0xFFFF 00:26:19.913 Flush: Supported 00:26:19.913 Reservation: Not Supported 00:26:19.913 Namespace Sharing Capabilities: Private 00:26:19.913 Size (in LBAs): 1048576 (4GiB) 00:26:19.913 Capacity (in LBAs): 1048576 (4GiB) 00:26:19.913 Utilization (in LBAs): 1048576 (4GiB) 00:26:19.913 Thin Provisioning: Not Supported 00:26:19.913 Per-NS Atomic Units: No 00:26:19.913 Maximum Single Source Range Length: 128 00:26:19.913 Maximum Copy Length: 128 00:26:19.913 Maximum Source Range Count: 128 00:26:19.913 NGUID/EUI64 Never Reused: No 00:26:19.913 Namespace Write Protected: No 00:26:19.913 Number of LBA Formats: 8 00:26:19.913 Current LBA Format: LBA Format #04 00:26:19.913 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.913 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.913 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.913 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.913 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.913 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.913 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.913 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.913 00:26:19.913 NVM Specific Namespace Data 00:26:19.913 =========================== 00:26:19.913 Logical Block Storage Tag Mask: 0 00:26:19.913 Protection Information Capabilities: 00:26:19.913 16b Guard Protection Information Storage Tag Support: No 00:26:19.913 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:19.913 Storage Tag Check Read Support: No 00:26:19.913 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:19.913 Namespace ID:3 00:26:19.913 Error Recovery Timeout: Unlimited 00:26:19.913 Command Set Identifier: NVM (00h) 00:26:19.913 Deallocate: Supported 00:26:19.913 Deallocated/Unwritten Error: Supported 00:26:19.913 Deallocated Read 
Value: All 0x00 00:26:19.913 Deallocate in Write Zeroes: Not Supported 00:26:19.913 Deallocated Guard Field: 0xFFFF 00:26:19.913 Flush: Supported 00:26:19.913 Reservation: Not Supported 00:26:19.913 Namespace Sharing Capabilities: Private 00:26:19.913 Size (in LBAs): 1048576 (4GiB) 00:26:19.913 Capacity (in LBAs): 1048576 (4GiB) 00:26:19.913 Utilization (in LBAs): 1048576 (4GiB) 00:26:19.913 Thin Provisioning: Not Supported 00:26:19.913 Per-NS Atomic Units: No 00:26:19.913 Maximum Single Source Range Length: 128 00:26:19.913 Maximum Copy Length: 128 00:26:19.913 Maximum Source Range Count: 128 00:26:19.913 NGUID/EUI64 Never Reused: No 00:26:19.913 Namespace Write Protected: No 00:26:19.913 Number of LBA Formats: 8 00:26:19.913 Current LBA Format: LBA Format #04 00:26:19.913 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.913 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:19.913 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:19.913 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:19.913 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:19.913 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:19.913 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:19.913 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:19.913 00:26:19.913 NVM Specific Namespace Data 00:26:19.913 =========================== 00:26:19.913 Logical Block Storage Tag Mask: 0 00:26:19.913 Protection Information Capabilities: 00:26:19.913 16b Guard Protection Information Storage Tag Support: No 00:26:19.913 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:20.172 Storage Tag Check Read Support: No 00:26:20.172 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.172 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.172 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.172 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.173 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.173 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.173 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.173 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.173 06:52:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:26:20.173 06:52:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:26:20.432 ===================================================== 00:26:20.432 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:26:20.432 ===================================================== 00:26:20.432 Controller Capabilities/Features 00:26:20.432 ================================ 00:26:20.432 Vendor ID: 1b36 00:26:20.432 Subsystem Vendor ID: 1af4 00:26:20.432 Serial Number: 12343 00:26:20.432 Model Number: QEMU NVMe Ctrl 00:26:20.432 Firmware Version: 8.0.0 00:26:20.432 Recommended Arb Burst: 6 00:26:20.432 IEEE OUI Identifier: 00 54 52 00:26:20.432 Multi-path I/O 00:26:20.432 May have multiple subsystem ports: No 00:26:20.432 May have multiple controllers: Yes 00:26:20.432 Associated with SR-IOV VF: No 00:26:20.432 Max Data Transfer Size: 524288 00:26:20.432 Max Number of Namespaces: 
256 00:26:20.432 Max Number of I/O Queues: 64 00:26:20.432 NVMe Specification Version (VS): 1.4 00:26:20.432 NVMe Specification Version (Identify): 1.4 00:26:20.432 Maximum Queue Entries: 2048 00:26:20.432 Contiguous Queues Required: Yes 00:26:20.432 Arbitration Mechanisms Supported 00:26:20.432 Weighted Round Robin: Not Supported 00:26:20.432 Vendor Specific: Not Supported 00:26:20.432 Reset Timeout: 7500 ms 00:26:20.432 Doorbell Stride: 4 bytes 00:26:20.432 NVM Subsystem Reset: Not Supported 00:26:20.432 Command Sets Supported 00:26:20.432 NVM Command Set: Supported 00:26:20.432 Boot Partition: Not Supported 00:26:20.432 Memory Page Size Minimum: 4096 bytes 00:26:20.432 Memory Page Size Maximum: 65536 bytes 00:26:20.432 Persistent Memory Region: Not Supported 00:26:20.432 Optional Asynchronous Events Supported 00:26:20.432 Namespace Attribute Notices: Supported 00:26:20.432 Firmware Activation Notices: Not Supported 00:26:20.432 ANA Change Notices: Not Supported 00:26:20.432 PLE Aggregate Log Change Notices: Not Supported 00:26:20.432 LBA Status Info Alert Notices: Not Supported 00:26:20.432 EGE Aggregate Log Change Notices: Not Supported 00:26:20.432 Normal NVM Subsystem Shutdown event: Not Supported 00:26:20.432 Zone Descriptor Change Notices: Not Supported 00:26:20.432 Discovery Log Change Notices: Not Supported 00:26:20.432 Controller Attributes 00:26:20.432 128-bit Host Identifier: Not Supported 00:26:20.432 Non-Operational Permissive Mode: Not Supported 00:26:20.432 NVM Sets: Not Supported 00:26:20.432 Read Recovery Levels: Not Supported 00:26:20.432 Endurance Groups: Supported 00:26:20.432 Predictable Latency Mode: Not Supported 00:26:20.432 Traffic Based Keep Alive: Not Supported 00:26:20.432 Namespace Granularity: Not Supported 00:26:20.432 SQ Associations: Not Supported 00:26:20.432 UUID List: Not Supported 00:26:20.432 Multi-Domain Subsystem: Not Supported 00:26:20.432 Fixed Capacity Management: Not Supported 00:26:20.432 Variable Capacity Management: Not Supported 00:26:20.432 Delete Endurance Group: Not Supported 00:26:20.432 Delete NVM Set: Not Supported 00:26:20.432 Extended LBA Formats Supported: Supported 00:26:20.432 Flexible Data Placement Supported: Supported 00:26:20.432 00:26:20.432 Controller Memory Buffer Support 00:26:20.432 ================================ 00:26:20.432 Supported: No 00:26:20.432 00:26:20.432 Persistent Memory Region Support 00:26:20.432 ================================ 00:26:20.432 Supported: No 00:26:20.432 00:26:20.432 Admin Command Set Attributes 00:26:20.432 ============================ 00:26:20.432 Security Send/Receive: Not Supported 00:26:20.432 Format NVM: Supported 00:26:20.432 Firmware Activate/Download: Not Supported 00:26:20.432 Namespace Management: Supported 00:26:20.432 Device Self-Test: Not Supported 00:26:20.433 Directives: Supported 00:26:20.433 NVMe-MI: Not Supported 00:26:20.433 Virtualization Management: Not Supported 00:26:20.433 Doorbell Buffer Config: Supported 00:26:20.433 Get LBA Status Capability: Not Supported 00:26:20.433 Command & Feature Lockdown Capability: Not Supported 00:26:20.433 Abort Command Limit: 4 00:26:20.433 Async Event Request Limit: 4 00:26:20.433 Number of Firmware Slots: N/A 00:26:20.433 Firmware Slot 1 Read-Only: N/A 00:26:20.433 Firmware Activation Without Reset: N/A 00:26:20.433 Multiple Update Detection Support: N/A 00:26:20.433 Firmware Update Granularity: No Information Provided 00:26:20.433 Per-Namespace SMART Log: Yes 00:26:20.433 Asymmetric Namespace Access Log Page: Not Supported 
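(Editorial aside, not part of the captured run: two of the derived values the identify dumps print can be spot-checked by hand. The Celsius figure in parentheses next to each temperature is the tool's integer Kelvin conversion, and each namespace size multiplies the LBA count by the data size of the current LBA format, assumed here to be the 4096-byte LBA Format #04 reported above. A minimal shell sketch:)
# Kelvin -> Celsius as printed in the health sections: 323 K -> 50 C
echo "$((323 - 273)) Celsius"
# Namespace capacity: LBA count x LBA data size.
# 1048576 LBAs x 4096 B = 4294967296 B = 4 GiB, matching "1048576 (4GiB)".
echo "$(( 1048576 * 4096 / (1024 * 1024 * 1024) )) GiB"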
00:26:20.433 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:26:20.433 Command Effects Log Page: Supported 00:26:20.433 Get Log Page Extended Data: Supported 00:26:20.433 Telemetry Log Pages: Not Supported 00:26:20.433 Persistent Event Log Pages: Not Supported 00:26:20.433 Supported Log Pages Log Page: May Support 00:26:20.433 Commands Supported & Effects Log Page: Not Supported 00:26:20.433 Feature Identifiers & Effects Log Page: May Support 00:26:20.433 NVMe-MI Commands & Effects Log Page: May Support 00:26:20.433 Data Area 4 for Telemetry Log: Not Supported 00:26:20.433 Error Log Page Entries Supported: 1 00:26:20.433 Keep Alive: Not Supported 00:26:20.433 00:26:20.433 NVM Command Set Attributes 00:26:20.433 ========================== 00:26:20.433 Submission Queue Entry Size 00:26:20.433 Max: 64 00:26:20.433 Min: 64 00:26:20.433 Completion Queue Entry Size 00:26:20.433 Max: 16 00:26:20.433 Min: 16 00:26:20.433 Number of Namespaces: 256 00:26:20.433 Compare Command: Supported 00:26:20.433 Write Uncorrectable Command: Not Supported 00:26:20.433 Dataset Management Command: Supported 00:26:20.433 Write Zeroes Command: Supported 00:26:20.433 Set Features Save Field: Supported 00:26:20.433 Reservations: Not Supported 00:26:20.433 Timestamp: Supported 00:26:20.433 Copy: Supported 00:26:20.433 Volatile Write Cache: Present 00:26:20.433 Atomic Write Unit (Normal): 1 00:26:20.433 Atomic Write Unit (PFail): 1 00:26:20.433 Atomic Compare & Write Unit: 1 00:26:20.433 Fused Compare & Write: Not Supported 00:26:20.433 Scatter-Gather List 00:26:20.433 SGL Command Set: Supported 00:26:20.433 SGL Keyed: Not Supported 00:26:20.433 SGL Bit Bucket Descriptor: Not Supported 00:26:20.433 SGL Metadata Pointer: Not Supported 00:26:20.433 Oversized SGL: Not Supported 00:26:20.433 SGL Metadata Address: Not Supported 00:26:20.433 SGL Offset: Not Supported 00:26:20.433 Transport SGL Data Block: Not Supported 00:26:20.433 Replay Protected Memory Block: Not Supported 00:26:20.433 00:26:20.433 Firmware Slot Information 00:26:20.433 ========================= 00:26:20.433 Active slot: 1 00:26:20.433 Slot 1 Firmware Revision: 1.0 00:26:20.433 00:26:20.433 00:26:20.433 Commands Supported and Effects 00:26:20.433 ============================== 00:26:20.433 Admin Commands 00:26:20.433 -------------- 00:26:20.433 Delete I/O Submission Queue (00h): Supported 00:26:20.433 Create I/O Submission Queue (01h): Supported 00:26:20.433 Get Log Page (02h): Supported 00:26:20.433 Delete I/O Completion Queue (04h): Supported 00:26:20.433 Create I/O Completion Queue (05h): Supported 00:26:20.433 Identify (06h): Supported 00:26:20.433 Abort (08h): Supported 00:26:20.433 Set Features (09h): Supported 00:26:20.433 Get Features (0Ah): Supported 00:26:20.433 Asynchronous Event Request (0Ch): Supported 00:26:20.433 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:20.433 Directive Send (19h): Supported 00:26:20.433 Directive Receive (1Ah): Supported 00:26:20.433 Virtualization Management (1Ch): Supported 00:26:20.433 Doorbell Buffer Config (7Ch): Supported 00:26:20.433 Format NVM (80h): Supported LBA-Change 00:26:20.433 I/O Commands 00:26:20.433 ------------ 00:26:20.433 Flush (00h): Supported LBA-Change 00:26:20.433 Write (01h): Supported LBA-Change 00:26:20.433 Read (02h): Supported 00:26:20.433 Compare (05h): Supported 00:26:20.433 Write Zeroes (08h): Supported LBA-Change 00:26:20.433 Dataset Management (09h): Supported LBA-Change 00:26:20.433 Unknown (0Ch): Supported 00:26:20.433 Unknown (12h): Supported 00:26:20.433 Copy 
(19h): Supported LBA-Change 00:26:20.433 Unknown (1Dh): Supported LBA-Change 00:26:20.433 00:26:20.433 Error Log 00:26:20.433 ========= 00:26:20.433 00:26:20.433 Arbitration 00:26:20.433 =========== 00:26:20.433 Arbitration Burst: no limit 00:26:20.433 00:26:20.433 Power Management 00:26:20.433 ================ 00:26:20.433 Number of Power States: 1 00:26:20.433 Current Power State: Power State #0 00:26:20.433 Power State #0: 00:26:20.433 Max Power: 25.00 W 00:26:20.433 Non-Operational State: Operational 00:26:20.433 Entry Latency: 16 microseconds 00:26:20.433 Exit Latency: 4 microseconds 00:26:20.433 Relative Read Throughput: 0 00:26:20.433 Relative Read Latency: 0 00:26:20.433 Relative Write Throughput: 0 00:26:20.433 Relative Write Latency: 0 00:26:20.433 Idle Power: Not Reported 00:26:20.433 Active Power: Not Reported 00:26:20.433 Non-Operational Permissive Mode: Not Supported 00:26:20.433 00:26:20.433 Health Information 00:26:20.433 ================== 00:26:20.433 Critical Warnings: 00:26:20.433 Available Spare Space: OK 00:26:20.433 Temperature: OK 00:26:20.433 Device Reliability: OK 00:26:20.433 Read Only: No 00:26:20.433 Volatile Memory Backup: OK 00:26:20.433 Current Temperature: 323 Kelvin (50 Celsius) 00:26:20.433 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:20.433 Available Spare: 0% 00:26:20.433 Available Spare Threshold: 0% 00:26:20.433 Life Percentage Used: 0% 00:26:20.433 Data Units Read: 769 00:26:20.433 Data Units Written: 699 00:26:20.433 Host Read Commands: 34109 00:26:20.433 Host Write Commands: 33532 00:26:20.433 Controller Busy Time: 0 minutes 00:26:20.433 Power Cycles: 0 00:26:20.433 Power On Hours: 0 hours 00:26:20.433 Unsafe Shutdowns: 0 00:26:20.433 Unrecoverable Media Errors: 0 00:26:20.433 Lifetime Error Log Entries: 0 00:26:20.433 Warning Temperature Time: 0 minutes 00:26:20.433 Critical Temperature Time: 0 minutes 00:26:20.433 00:26:20.433 Number of Queues 00:26:20.433 ================ 00:26:20.433 Number of I/O Submission Queues: 64 00:26:20.433 Number of I/O Completion Queues: 64 00:26:20.433 00:26:20.433 ZNS Specific Controller Data 00:26:20.433 ============================ 00:26:20.433 Zone Append Size Limit: 0 00:26:20.433 00:26:20.433 00:26:20.433 Active Namespaces 00:26:20.433 ================= 00:26:20.433 Namespace ID:1 00:26:20.433 Error Recovery Timeout: Unlimited 00:26:20.433 Command Set Identifier: NVM (00h) 00:26:20.433 Deallocate: Supported 00:26:20.433 Deallocated/Unwritten Error: Supported 00:26:20.433 Deallocated Read Value: All 0x00 00:26:20.433 Deallocate in Write Zeroes: Not Supported 00:26:20.433 Deallocated Guard Field: 0xFFFF 00:26:20.433 Flush: Supported 00:26:20.433 Reservation: Not Supported 00:26:20.433 Namespace Sharing Capabilities: Multiple Controllers 00:26:20.433 Size (in LBAs): 262144 (1GiB) 00:26:20.433 Capacity (in LBAs): 262144 (1GiB) 00:26:20.433 Utilization (in LBAs): 262144 (1GiB) 00:26:20.433 Thin Provisioning: Not Supported 00:26:20.433 Per-NS Atomic Units: No 00:26:20.433 Maximum Single Source Range Length: 128 00:26:20.433 Maximum Copy Length: 128 00:26:20.433 Maximum Source Range Count: 128 00:26:20.433 NGUID/EUI64 Never Reused: No 00:26:20.433 Namespace Write Protected: No 00:26:20.433 Endurance group ID: 1 00:26:20.433 Number of LBA Formats: 8 00:26:20.433 Current LBA Format: LBA Format #04 00:26:20.433 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:20.433 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:20.433 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:20.433 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:26:20.433 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:20.433 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:20.433 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:20.433 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:20.433 00:26:20.433 Get Feature FDP: 00:26:20.433 ================ 00:26:20.433 Enabled: Yes 00:26:20.433 FDP configuration index: 0 00:26:20.433 00:26:20.433 FDP configurations log page 00:26:20.433 =========================== 00:26:20.433 Number of FDP configurations: 1 00:26:20.433 Version: 0 00:26:20.433 Size: 112 00:26:20.433 FDP Configuration Descriptor: 0 00:26:20.433 Descriptor Size: 96 00:26:20.433 Reclaim Group Identifier format: 2 00:26:20.433 FDP Volatile Write Cache: Not Present 00:26:20.433 FDP Configuration: Valid 00:26:20.433 Vendor Specific Size: 0 00:26:20.434 Number of Reclaim Groups: 2 00:26:20.434 Number of Reclaim Unit Handles: 8 00:26:20.434 Max Placement Identifiers: 128 00:26:20.434 Number of Namespaces Supported: 256 00:26:20.434 Reclaim Unit Nominal Size: 6000000 bytes 00:26:20.434 Estimated Reclaim Unit Time Limit: Not Reported 00:26:20.434 RUH Desc #000: RUH Type: Initially Isolated 00:26:20.434 RUH Desc #001: RUH Type: Initially Isolated 00:26:20.434 RUH Desc #002: RUH Type: Initially Isolated 00:26:20.434 RUH Desc #003: RUH Type: Initially Isolated 00:26:20.434 RUH Desc #004: RUH Type: Initially Isolated 00:26:20.434 RUH Desc #005: RUH Type: Initially Isolated 00:26:20.434 RUH Desc #006: RUH Type: Initially Isolated 00:26:20.434 RUH Desc #007: RUH Type: Initially Isolated 00:26:20.434 00:26:20.434 FDP reclaim unit handle usage log page 00:26:20.434 ====================================== 00:26:20.434 Number of Reclaim Unit Handles: 8 00:26:20.434 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:26:20.434 RUH Usage Desc #001: RUH Attributes: Unused 00:26:20.434 RUH Usage Desc #002: RUH Attributes: Unused 00:26:20.434 RUH Usage Desc #003: RUH Attributes: Unused 00:26:20.434 RUH Usage Desc #004: RUH Attributes: Unused 00:26:20.434 RUH Usage Desc #005: RUH Attributes: Unused 00:26:20.434 RUH Usage Desc #006: RUH Attributes: Unused 00:26:20.434 RUH Usage Desc #007: RUH Attributes: Unused 00:26:20.434 00:26:20.434 FDP statistics log page 00:26:20.434 ======================= 00:26:20.434 Host bytes with metadata written: 438673408 00:26:20.434 Media bytes with metadata written: 438738944 00:26:20.434 Media bytes erased: 0 00:26:20.434 00:26:20.434 FDP events log page 00:26:20.434 =================== 00:26:20.434 Number of FDP events: 0 00:26:20.434 00:26:20.434 NVM Specific Namespace Data 00:26:20.434 =========================== 00:26:20.434 Logical Block Storage Tag Mask: 0 00:26:20.434 Protection Information Capabilities: 00:26:20.434 16b Guard Protection Information Storage Tag Support: No 00:26:20.434 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:26:20.434 Storage Tag Check Read Support: No 00:26:20.434 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.434 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.434 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.434 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.434 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.434 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.434 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.434 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:26:20.434 00:26:20.434 real 0m1.652s 00:26:20.434 user 0m0.664s 00:26:20.434 sys 0m0.777s 00:26:20.434 06:52:52 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.434 ************************************ 00:26:20.434 END TEST nvme_identify 00:26:20.434 ************************************ 00:26:20.434 06:52:52 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:26:20.434 06:52:52 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:26:20.434 06:52:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:20.434 06:52:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.434 06:52:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:26:20.434 ************************************ 00:26:20.434 START TEST nvme_perf 00:26:20.434 ************************************ 00:26:20.434 06:52:52 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:26:20.434 06:52:52 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:26:21.811 Initializing NVMe Controllers 00:26:21.811 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:21.811 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:26:21.811 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:26:21.811 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:26:21.811 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:21.811 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:26:21.811 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:26:21.811 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:26:21.811 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:26:21.811 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:26:21.811 Initialization complete. Launching workers. 
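(Editorial aside on the spdk_nvme_perf invocation above: it requests a 1-second 100% read run at queue depth 128 with 12288-byte I/Os; -q is the queue depth, -w the workload, -o the I/O size in bytes, and -t the run time in seconds. The doubled -L appears to enable the detailed latency tracking and histograms printed below, and -i and -N are, as far as I can tell, the shared-memory ID and shutdown-notification options; confirm against the tool's --help. The summary table that follows can be sanity-checked with Little's law, outstanding I/Os = IOPS x mean latency, using the 0000:00:10.0 row as a worked example in this minimal awk sketch:)
awk 'BEGIN {
  iops = 13822.74      # IOPS, PCIE (0000:00:10.0) NSID 1, from the table below
  avg_us = 9280.64     # average latency in microseconds, same row
  # rate x mean latency in seconds ~= 128, matching the requested -q 128
  printf "implied queue depth: %.1f\n", iops * avg_us / 1e6
}'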
00:26:21.811 ======================================================== 00:26:21.811 Latency(us) 00:26:21.811 Device Information : IOPS MiB/s Average min max 00:26:21.811 PCIE (0000:00:10.0) NSID 1 from core 0: 13822.74 161.99 9280.64 7480.30 43844.80 00:26:21.811 PCIE (0000:00:11.0) NSID 1 from core 0: 13822.74 161.99 9267.15 7571.50 41528.90 00:26:21.811 PCIE (0000:00:13.0) NSID 1 from core 0: 13822.74 161.99 9251.71 7589.53 40226.80 00:26:21.811 PCIE (0000:00:12.0) NSID 1 from core 0: 13822.74 161.99 9234.70 7611.50 38371.39 00:26:21.811 PCIE (0000:00:12.0) NSID 2 from core 0: 13822.74 161.99 9217.60 7651.14 36296.07 00:26:21.811 PCIE (0000:00:12.0) NSID 3 from core 0: 13886.74 162.74 9158.05 7628.63 28406.73 00:26:21.811 ======================================================== 00:26:21.811 Total : 83000.45 972.66 9234.92 7480.30 43844.80 00:26:21.811 00:26:21.811 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:26:21.811 ================================================================================= 00:26:21.811 1.00000% : 7804.742us 00:26:21.811 10.00000% : 8102.633us 00:26:21.811 25.00000% : 8400.524us 00:26:21.811 50.00000% : 8817.571us 00:26:21.811 75.00000% : 9353.775us 00:26:21.811 90.00000% : 10128.291us 00:26:21.811 95.00000% : 11200.698us 00:26:21.811 98.00000% : 13226.356us 00:26:21.811 99.00000% : 15371.171us 00:26:21.811 99.50000% : 37176.785us 00:26:21.811 99.90000% : 43611.229us 00:26:21.811 99.99000% : 43849.542us 00:26:21.811 99.99900% : 43849.542us 00:26:21.811 99.99990% : 43849.542us 00:26:21.811 99.99999% : 43849.542us 00:26:21.811 00:26:21.811 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:26:21.811 ================================================================================= 00:26:21.811 1.00000% : 7864.320us 00:26:21.811 10.00000% : 8162.211us 00:26:21.811 25.00000% : 8400.524us 00:26:21.811 50.00000% : 8817.571us 00:26:21.811 75.00000% : 9294.196us 00:26:21.811 90.00000% : 10068.713us 00:26:21.811 95.00000% : 11260.276us 00:26:21.811 98.00000% : 13226.356us 00:26:21.811 99.00000% : 15371.171us 00:26:21.811 99.50000% : 35031.971us 00:26:21.811 99.90000% : 41228.102us 00:26:21.811 99.99000% : 41704.727us 00:26:21.811 99.99900% : 41704.727us 00:26:21.811 99.99990% : 41704.727us 00:26:21.811 99.99999% : 41704.727us 00:26:21.811 00:26:21.811 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:26:21.811 ================================================================================= 00:26:21.811 1.00000% : 7864.320us 00:26:21.811 10.00000% : 8162.211us 00:26:21.811 25.00000% : 8400.524us 00:26:21.811 50.00000% : 8817.571us 00:26:21.811 75.00000% : 9353.775us 00:26:21.811 90.00000% : 10128.291us 00:26:21.811 95.00000% : 11141.120us 00:26:21.811 98.00000% : 13166.778us 00:26:21.811 99.00000% : 15013.702us 00:26:21.811 99.50000% : 33602.095us 00:26:21.811 99.90000% : 40036.538us 00:26:21.811 99.99000% : 40274.851us 00:26:21.811 99.99900% : 40274.851us 00:26:21.811 99.99990% : 40274.851us 00:26:21.811 99.99999% : 40274.851us 00:26:21.811 00:26:21.811 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:26:21.811 ================================================================================= 00:26:21.811 1.00000% : 7864.320us 00:26:21.811 10.00000% : 8162.211us 00:26:21.811 25.00000% : 8400.524us 00:26:21.811 50.00000% : 8817.571us 00:26:21.811 75.00000% : 9353.775us 00:26:21.811 90.00000% : 10068.713us 00:26:21.811 95.00000% : 11081.542us 00:26:21.811 98.00000% : 13285.935us 00:26:21.811 
99.00000% : 14537.076us 00:26:21.811 99.50000% : 31218.967us 00:26:21.811 99.90000% : 38130.036us 00:26:21.811 99.99000% : 38368.349us 00:26:21.811 99.99900% : 38606.662us 00:26:21.811 99.99990% : 38606.662us 00:26:21.811 99.99999% : 38606.662us 00:26:21.811 00:26:21.811 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:26:21.811 ================================================================================= 00:26:21.811 1.00000% : 7864.320us 00:26:21.811 10.00000% : 8162.211us 00:26:21.811 25.00000% : 8400.524us 00:26:21.811 50.00000% : 8817.571us 00:26:21.811 75.00000% : 9353.775us 00:26:21.811 90.00000% : 10068.713us 00:26:21.811 95.00000% : 10962.385us 00:26:21.811 98.00000% : 13583.825us 00:26:21.811 99.00000% : 15371.171us 00:26:21.811 99.50000% : 29074.153us 00:26:21.811 99.90000% : 35985.222us 00:26:21.811 99.99000% : 36461.847us 00:26:21.812 99.99900% : 36461.847us 00:26:21.812 99.99990% : 36461.847us 00:26:21.812 99.99999% : 36461.847us 00:26:21.812 00:26:21.812 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:26:21.812 ================================================================================= 00:26:21.812 1.00000% : 7864.320us 00:26:21.812 10.00000% : 8162.211us 00:26:21.812 25.00000% : 8400.524us 00:26:21.812 50.00000% : 8817.571us 00:26:21.812 75.00000% : 9353.775us 00:26:21.812 90.00000% : 10128.291us 00:26:21.812 95.00000% : 11200.698us 00:26:21.812 98.00000% : 13464.669us 00:26:21.812 99.00000% : 15371.171us 00:26:21.812 99.50000% : 21686.458us 00:26:21.812 99.90000% : 28120.902us 00:26:21.812 99.99000% : 28478.371us 00:26:21.812 99.99900% : 28478.371us 00:26:21.812 99.99990% : 28478.371us 00:26:21.812 99.99999% : 28478.371us 00:26:21.812 00:26:21.812 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:26:21.812 ============================================================================== 00:26:21.812 Range in us Cumulative IO count 00:26:21.812 7477.062 - 7506.851: 0.0145% ( 2) 00:26:21.812 7506.851 - 7536.640: 0.0289% ( 2) 00:26:21.812 7536.640 - 7566.429: 0.0362% ( 1) 00:26:21.812 7566.429 - 7596.218: 0.0651% ( 4) 00:26:21.812 7596.218 - 7626.007: 0.1085% ( 6) 00:26:21.812 7626.007 - 7685.585: 0.3545% ( 34) 00:26:21.812 7685.585 - 7745.164: 0.8174% ( 64) 00:26:21.812 7745.164 - 7804.742: 1.6493% ( 115) 00:26:21.812 7804.742 - 7864.320: 2.9152% ( 175) 00:26:21.812 7864.320 - 7923.898: 4.6224% ( 236) 00:26:21.812 7923.898 - 7983.476: 6.8359% ( 306) 00:26:21.812 7983.476 - 8043.055: 9.1942% ( 326) 00:26:21.812 8043.055 - 8102.633: 11.8273% ( 364) 00:26:21.812 8102.633 - 8162.211: 14.5761% ( 380) 00:26:21.812 8162.211 - 8221.789: 17.5420% ( 410) 00:26:21.812 8221.789 - 8281.367: 20.4716% ( 405) 00:26:21.812 8281.367 - 8340.945: 23.5532% ( 426) 00:26:21.812 8340.945 - 8400.524: 26.5263% ( 411) 00:26:21.812 8400.524 - 8460.102: 29.7381% ( 444) 00:26:21.812 8460.102 - 8519.680: 33.0078% ( 452) 00:26:21.812 8519.680 - 8579.258: 36.1979% ( 441) 00:26:21.812 8579.258 - 8638.836: 39.7208% ( 487) 00:26:21.812 8638.836 - 8698.415: 43.3955% ( 508) 00:26:21.812 8698.415 - 8757.993: 46.9184% ( 487) 00:26:21.812 8757.993 - 8817.571: 50.6366% ( 514) 00:26:21.812 8817.571 - 8877.149: 54.1739% ( 489) 00:26:21.812 8877.149 - 8936.727: 57.5738% ( 470) 00:26:21.812 8936.727 - 8996.305: 60.9375% ( 465) 00:26:21.812 8996.305 - 9055.884: 64.1204% ( 440) 00:26:21.812 9055.884 - 9115.462: 67.1513% ( 419) 00:26:21.812 9115.462 - 9175.040: 69.8929% ( 379) 00:26:21.812 9175.040 - 9234.618: 72.3163% ( 335) 00:26:21.812 9234.618 - 9294.196: 
74.4719% ( 298) 00:26:21.812 9294.196 - 9353.775: 76.4468% ( 273) 00:26:21.812 9353.775 - 9413.353: 78.1033% ( 229) 00:26:21.812 9413.353 - 9472.931: 79.5211% ( 196) 00:26:21.812 9472.931 - 9532.509: 80.9968% ( 204) 00:26:21.812 9532.509 - 9592.087: 82.3206% ( 183) 00:26:21.812 9592.087 - 9651.665: 83.5865% ( 175) 00:26:21.812 9651.665 - 9711.244: 84.7656% ( 163) 00:26:21.812 9711.244 - 9770.822: 85.8507% ( 150) 00:26:21.812 9770.822 - 9830.400: 86.8345% ( 136) 00:26:21.812 9830.400 - 9889.978: 87.7532% ( 127) 00:26:21.812 9889.978 - 9949.556: 88.6285% ( 121) 00:26:21.812 9949.556 - 10009.135: 89.3519% ( 100) 00:26:21.812 10009.135 - 10068.713: 89.9523% ( 83) 00:26:21.812 10068.713 - 10128.291: 90.4514% ( 69) 00:26:21.812 10128.291 - 10187.869: 90.9288% ( 66) 00:26:21.812 10187.869 - 10247.447: 91.3194% ( 54) 00:26:21.812 10247.447 - 10307.025: 91.6667% ( 48) 00:26:21.812 10307.025 - 10366.604: 91.9922% ( 45) 00:26:21.812 10366.604 - 10426.182: 92.3466% ( 49) 00:26:21.812 10426.182 - 10485.760: 92.6432% ( 41) 00:26:21.812 10485.760 - 10545.338: 92.9109% ( 37) 00:26:21.812 10545.338 - 10604.916: 93.1785% ( 37) 00:26:21.812 10604.916 - 10664.495: 93.4172% ( 33) 00:26:21.812 10664.495 - 10724.073: 93.6198% ( 28) 00:26:21.812 10724.073 - 10783.651: 93.8440% ( 31) 00:26:21.812 10783.651 - 10843.229: 94.0394% ( 27) 00:26:21.812 10843.229 - 10902.807: 94.2419% ( 28) 00:26:21.812 10902.807 - 10962.385: 94.4155% ( 24) 00:26:21.812 10962.385 - 11021.964: 94.6036% ( 26) 00:26:21.812 11021.964 - 11081.542: 94.7844% ( 25) 00:26:21.812 11081.542 - 11141.120: 94.9146% ( 18) 00:26:21.812 11141.120 - 11200.698: 95.0810% ( 23) 00:26:21.812 11200.698 - 11260.276: 95.2112% ( 18) 00:26:21.812 11260.276 - 11319.855: 95.3487% ( 19) 00:26:21.812 11319.855 - 11379.433: 95.4644% ( 16) 00:26:21.812 11379.433 - 11439.011: 95.5874% ( 17) 00:26:21.812 11439.011 - 11498.589: 95.7104% ( 17) 00:26:21.812 11498.589 - 11558.167: 95.8406% ( 18) 00:26:21.812 11558.167 - 11617.745: 95.9780% ( 19) 00:26:21.812 11617.745 - 11677.324: 96.0648% ( 12) 00:26:21.812 11677.324 - 11736.902: 96.1733% ( 15) 00:26:21.812 11736.902 - 11796.480: 96.2601% ( 12) 00:26:21.812 11796.480 - 11856.058: 96.3542% ( 13) 00:26:21.812 11856.058 - 11915.636: 96.4193% ( 9) 00:26:21.812 11915.636 - 11975.215: 96.4699% ( 7) 00:26:21.812 11975.215 - 12034.793: 96.5278% ( 8) 00:26:21.812 12034.793 - 12094.371: 96.5784% ( 7) 00:26:21.812 12094.371 - 12153.949: 96.6363% ( 8) 00:26:21.812 12153.949 - 12213.527: 96.7159% ( 11) 00:26:21.812 12213.527 - 12273.105: 96.7810% ( 9) 00:26:21.812 12273.105 - 12332.684: 96.8533% ( 10) 00:26:21.812 12332.684 - 12392.262: 96.9473% ( 13) 00:26:21.812 12392.262 - 12451.840: 97.0197% ( 10) 00:26:21.812 12451.840 - 12511.418: 97.1065% ( 12) 00:26:21.812 12511.418 - 12570.996: 97.2005% ( 13) 00:26:21.812 12570.996 - 12630.575: 97.3090% ( 15) 00:26:21.812 12630.575 - 12690.153: 97.3958% ( 12) 00:26:21.812 12690.153 - 12749.731: 97.4609% ( 9) 00:26:21.812 12749.731 - 12809.309: 97.5333% ( 10) 00:26:21.812 12809.309 - 12868.887: 97.6056% ( 10) 00:26:21.812 12868.887 - 12928.465: 97.6707% ( 9) 00:26:21.812 12928.465 - 12988.044: 97.7431% ( 10) 00:26:21.812 12988.044 - 13047.622: 97.8009% ( 8) 00:26:21.812 13047.622 - 13107.200: 97.8877% ( 12) 00:26:21.812 13107.200 - 13166.778: 97.9601% ( 10) 00:26:21.812 13166.778 - 13226.356: 98.0324% ( 10) 00:26:21.812 13226.356 - 13285.935: 98.0830% ( 7) 00:26:21.812 13285.935 - 13345.513: 98.1626% ( 11) 00:26:21.812 13345.513 - 13405.091: 98.2422% ( 11) 00:26:21.812 13405.091 - 13464.669: 
98.3073% ( 9) 00:26:21.812 13464.669 - 13524.247: 98.3652% ( 8) 00:26:21.812 13524.247 - 13583.825: 98.4303% ( 9) 00:26:21.812 13583.825 - 13643.404: 98.4881% ( 8) 00:26:21.812 13643.404 - 13702.982: 98.5388% ( 7) 00:26:21.812 13702.982 - 13762.560: 98.5677% ( 4) 00:26:21.812 13762.560 - 13822.138: 98.5894% ( 3) 00:26:21.812 13822.138 - 13881.716: 98.6111% ( 3) 00:26:21.812 14179.607 - 14239.185: 98.6183% ( 1) 00:26:21.812 14239.185 - 14298.764: 98.6473% ( 4) 00:26:21.812 14298.764 - 14358.342: 98.6617% ( 2) 00:26:21.812 14358.342 - 14417.920: 98.6762% ( 2) 00:26:21.812 14417.920 - 14477.498: 98.7052% ( 4) 00:26:21.812 14477.498 - 14537.076: 98.7196% ( 2) 00:26:21.812 14537.076 - 14596.655: 98.7413% ( 3) 00:26:21.812 14596.655 - 14656.233: 98.7630% ( 3) 00:26:21.812 14656.233 - 14715.811: 98.7775% ( 2) 00:26:21.812 14715.811 - 14775.389: 98.8064% ( 4) 00:26:21.812 14775.389 - 14834.967: 98.8209% ( 2) 00:26:21.812 14834.967 - 14894.545: 98.8354% ( 2) 00:26:21.812 14894.545 - 14954.124: 98.8571% ( 3) 00:26:21.812 14954.124 - 15013.702: 98.8715% ( 2) 00:26:21.812 15013.702 - 15073.280: 98.9005% ( 4) 00:26:21.812 15073.280 - 15132.858: 98.9149% ( 2) 00:26:21.812 15132.858 - 15192.436: 98.9366% ( 3) 00:26:21.812 15192.436 - 15252.015: 98.9656% ( 4) 00:26:21.812 15252.015 - 15371.171: 99.0090% ( 6) 00:26:21.812 15371.171 - 15490.327: 99.0451% ( 5) 00:26:21.812 15490.327 - 15609.484: 99.0741% ( 4) 00:26:21.812 34793.658 - 35031.971: 99.0813% ( 1) 00:26:21.812 35031.971 - 35270.284: 99.1247% ( 6) 00:26:21.812 35270.284 - 35508.596: 99.1681% ( 6) 00:26:21.812 35508.596 - 35746.909: 99.2188% ( 7) 00:26:21.812 35746.909 - 35985.222: 99.2694% ( 7) 00:26:21.812 35985.222 - 36223.535: 99.3200% ( 7) 00:26:21.812 36223.535 - 36461.847: 99.3779% ( 8) 00:26:21.812 36461.847 - 36700.160: 99.4213% ( 6) 00:26:21.812 36700.160 - 36938.473: 99.4647% ( 6) 00:26:21.812 36938.473 - 37176.785: 99.5153% ( 7) 00:26:21.812 37176.785 - 37415.098: 99.5370% ( 3) 00:26:21.812 41704.727 - 41943.040: 99.5660% ( 4) 00:26:21.812 41943.040 - 42181.353: 99.6238% ( 8) 00:26:21.812 42181.353 - 42419.665: 99.6817% ( 8) 00:26:21.812 42419.665 - 42657.978: 99.7396% ( 8) 00:26:21.812 42657.978 - 42896.291: 99.7830% ( 6) 00:26:21.812 42896.291 - 43134.604: 99.8409% ( 8) 00:26:21.812 43134.604 - 43372.916: 99.8915% ( 7) 00:26:21.812 43372.916 - 43611.229: 99.9494% ( 8) 00:26:21.812 43611.229 - 43849.542: 100.0000% ( 7) 00:26:21.812 00:26:21.812 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:26:21.812 ============================================================================== 00:26:21.812 Range in us Cumulative IO count 00:26:21.812 7566.429 - 7596.218: 0.0145% ( 2) 00:26:21.812 7596.218 - 7626.007: 0.0289% ( 2) 00:26:21.812 7626.007 - 7685.585: 0.0940% ( 9) 00:26:21.812 7685.585 - 7745.164: 0.2242% ( 18) 00:26:21.812 7745.164 - 7804.742: 0.5859% ( 50) 00:26:21.812 7804.742 - 7864.320: 1.1791% ( 82) 00:26:21.812 7864.320 - 7923.898: 2.3076% ( 156) 00:26:21.812 7923.898 - 7983.476: 3.9497% ( 227) 00:26:21.812 7983.476 - 8043.055: 6.1849% ( 309) 00:26:21.812 8043.055 - 8102.633: 8.9627% ( 384) 00:26:21.812 8102.633 - 8162.211: 11.8851% ( 404) 00:26:21.812 8162.211 - 8221.789: 15.1548% ( 452) 00:26:21.812 8221.789 - 8281.367: 18.5330% ( 467) 00:26:21.813 8281.367 - 8340.945: 22.0486% ( 486) 00:26:21.813 8340.945 - 8400.524: 25.5642% ( 486) 00:26:21.813 8400.524 - 8460.102: 29.2101% ( 504) 00:26:21.813 8460.102 - 8519.680: 32.9210% ( 513) 00:26:21.813 8519.680 - 8579.258: 36.6247% ( 512) 00:26:21.813 8579.258 - 8638.836: 
40.3791% ( 519) 00:26:21.813 8638.836 - 8698.415: 44.1913% ( 527) 00:26:21.813 8698.415 - 8757.993: 48.0686% ( 536) 00:26:21.813 8757.993 - 8817.571: 51.9604% ( 538) 00:26:21.813 8817.571 - 8877.149: 55.7147% ( 519) 00:26:21.813 8877.149 - 8936.727: 59.3171% ( 498) 00:26:21.813 8936.727 - 8996.305: 62.6953% ( 467) 00:26:21.813 8996.305 - 9055.884: 65.8492% ( 436) 00:26:21.813 9055.884 - 9115.462: 68.6632% ( 389) 00:26:21.813 9115.462 - 9175.040: 71.1227% ( 340) 00:26:21.813 9175.040 - 9234.618: 73.3362% ( 306) 00:26:21.813 9234.618 - 9294.196: 75.2532% ( 265) 00:26:21.813 9294.196 - 9353.775: 77.0906% ( 254) 00:26:21.813 9353.775 - 9413.353: 78.7977% ( 236) 00:26:21.813 9413.353 - 9472.931: 80.3241% ( 211) 00:26:21.813 9472.931 - 9532.509: 81.7274% ( 194) 00:26:21.813 9532.509 - 9592.087: 82.9861% ( 174) 00:26:21.813 9592.087 - 9651.665: 84.1869% ( 166) 00:26:21.813 9651.665 - 9711.244: 85.4094% ( 169) 00:26:21.813 9711.244 - 9770.822: 86.5524% ( 158) 00:26:21.813 9770.822 - 9830.400: 87.5579% ( 139) 00:26:21.813 9830.400 - 9889.978: 88.4259% ( 120) 00:26:21.813 9889.978 - 9949.556: 89.1276% ( 97) 00:26:21.813 9949.556 - 10009.135: 89.7425% ( 85) 00:26:21.813 10009.135 - 10068.713: 90.2416% ( 69) 00:26:21.813 10068.713 - 10128.291: 90.7046% ( 64) 00:26:21.813 10128.291 - 10187.869: 91.0663% ( 50) 00:26:21.813 10187.869 - 10247.447: 91.4352% ( 51) 00:26:21.813 10247.447 - 10307.025: 91.7462% ( 43) 00:26:21.813 10307.025 - 10366.604: 92.0790% ( 46) 00:26:21.813 10366.604 - 10426.182: 92.3683% ( 40) 00:26:21.813 10426.182 - 10485.760: 92.6143% ( 34) 00:26:21.813 10485.760 - 10545.338: 92.8892% ( 38) 00:26:21.813 10545.338 - 10604.916: 93.0917% ( 28) 00:26:21.813 10604.916 - 10664.495: 93.2726% ( 25) 00:26:21.813 10664.495 - 10724.073: 93.5113% ( 33) 00:26:21.813 10724.073 - 10783.651: 93.7211% ( 29) 00:26:21.813 10783.651 - 10843.229: 93.9308% ( 29) 00:26:21.813 10843.229 - 10902.807: 94.1117% ( 25) 00:26:21.813 10902.807 - 10962.385: 94.2853% ( 24) 00:26:21.813 10962.385 - 11021.964: 94.4517% ( 23) 00:26:21.813 11021.964 - 11081.542: 94.6325% ( 25) 00:26:21.813 11081.542 - 11141.120: 94.7844% ( 21) 00:26:21.813 11141.120 - 11200.698: 94.9002% ( 16) 00:26:21.813 11200.698 - 11260.276: 95.0521% ( 21) 00:26:21.813 11260.276 - 11319.855: 95.1606% ( 15) 00:26:21.813 11319.855 - 11379.433: 95.2980% ( 19) 00:26:21.813 11379.433 - 11439.011: 95.4210% ( 17) 00:26:21.813 11439.011 - 11498.589: 95.5223% ( 14) 00:26:21.813 11498.589 - 11558.167: 95.6236% ( 14) 00:26:21.813 11558.167 - 11617.745: 95.6959% ( 10) 00:26:21.813 11617.745 - 11677.324: 95.7682% ( 10) 00:26:21.813 11677.324 - 11736.902: 95.8189% ( 7) 00:26:21.813 11736.902 - 11796.480: 95.8912% ( 10) 00:26:21.813 11796.480 - 11856.058: 95.9491% ( 8) 00:26:21.813 11856.058 - 11915.636: 95.9997% ( 7) 00:26:21.813 11915.636 - 11975.215: 96.0359% ( 5) 00:26:21.813 11975.215 - 12034.793: 96.0720% ( 5) 00:26:21.813 12034.793 - 12094.371: 96.1155% ( 6) 00:26:21.813 12094.371 - 12153.949: 96.1733% ( 8) 00:26:21.813 12153.949 - 12213.527: 96.2457% ( 10) 00:26:21.813 12213.527 - 12273.105: 96.3180% ( 10) 00:26:21.813 12273.105 - 12332.684: 96.4120% ( 13) 00:26:21.813 12332.684 - 12392.262: 96.5205% ( 15) 00:26:21.813 12392.262 - 12451.840: 96.6435% ( 17) 00:26:21.813 12451.840 - 12511.418: 96.7665% ( 17) 00:26:21.813 12511.418 - 12570.996: 96.8678% ( 14) 00:26:21.813 12570.996 - 12630.575: 96.9763% ( 15) 00:26:21.813 12630.575 - 12690.153: 97.0920% ( 16) 00:26:21.813 12690.153 - 12749.731: 97.1933% ( 14) 00:26:21.813 12749.731 - 12809.309: 97.2801% ( 12) 
00:26:21.813 12809.309 - 12868.887: 97.4175% ( 19) 00:26:21.813 12868.887 - 12928.465: 97.5260% ( 15) 00:26:21.813 12928.465 - 12988.044: 97.6635% ( 19) 00:26:21.813 12988.044 - 13047.622: 97.7720% ( 15) 00:26:21.813 13047.622 - 13107.200: 97.8660% ( 13) 00:26:21.813 13107.200 - 13166.778: 97.9601% ( 13) 00:26:21.813 13166.778 - 13226.356: 98.0469% ( 12) 00:26:21.813 13226.356 - 13285.935: 98.1554% ( 15) 00:26:21.813 13285.935 - 13345.513: 98.2350% ( 11) 00:26:21.813 13345.513 - 13405.091: 98.3290% ( 13) 00:26:21.813 13405.091 - 13464.669: 98.3869% ( 8) 00:26:21.813 13464.669 - 13524.247: 98.4375% ( 7) 00:26:21.813 13524.247 - 13583.825: 98.4664% ( 4) 00:26:21.813 13583.825 - 13643.404: 98.4881% ( 3) 00:26:21.813 13643.404 - 13702.982: 98.5098% ( 3) 00:26:21.813 13702.982 - 13762.560: 98.5315% ( 3) 00:26:21.813 13762.560 - 13822.138: 98.5605% ( 4) 00:26:21.813 13822.138 - 13881.716: 98.5822% ( 3) 00:26:21.813 13881.716 - 13941.295: 98.6111% ( 4) 00:26:21.813 14239.185 - 14298.764: 98.6183% ( 1) 00:26:21.813 14298.764 - 14358.342: 98.6400% ( 3) 00:26:21.813 14358.342 - 14417.920: 98.6617% ( 3) 00:26:21.813 14417.920 - 14477.498: 98.6834% ( 3) 00:26:21.813 14477.498 - 14537.076: 98.7196% ( 5) 00:26:21.813 14537.076 - 14596.655: 98.7413% ( 3) 00:26:21.813 14596.655 - 14656.233: 98.7630% ( 3) 00:26:21.813 14656.233 - 14715.811: 98.7920% ( 4) 00:26:21.813 14715.811 - 14775.389: 98.8064% ( 2) 00:26:21.813 14775.389 - 14834.967: 98.8281% ( 3) 00:26:21.813 14834.967 - 14894.545: 98.8498% ( 3) 00:26:21.813 14894.545 - 14954.124: 98.8715% ( 3) 00:26:21.813 14954.124 - 15013.702: 98.8932% ( 3) 00:26:21.813 15013.702 - 15073.280: 98.9222% ( 4) 00:26:21.813 15073.280 - 15132.858: 98.9366% ( 2) 00:26:21.813 15132.858 - 15192.436: 98.9656% ( 4) 00:26:21.813 15192.436 - 15252.015: 98.9873% ( 3) 00:26:21.813 15252.015 - 15371.171: 99.0379% ( 7) 00:26:21.813 15371.171 - 15490.327: 99.0741% ( 5) 00:26:21.813 32887.156 - 33125.469: 99.0885% ( 2) 00:26:21.813 33125.469 - 33363.782: 99.1392% ( 7) 00:26:21.813 33363.782 - 33602.095: 99.1898% ( 7) 00:26:21.813 33602.095 - 33840.407: 99.2477% ( 8) 00:26:21.813 33840.407 - 34078.720: 99.3056% ( 8) 00:26:21.813 34078.720 - 34317.033: 99.3634% ( 8) 00:26:21.813 34317.033 - 34555.345: 99.4068% ( 6) 00:26:21.813 34555.345 - 34793.658: 99.4575% ( 7) 00:26:21.813 34793.658 - 35031.971: 99.5153% ( 8) 00:26:21.813 35031.971 - 35270.284: 99.5370% ( 3) 00:26:21.813 39559.913 - 39798.225: 99.5949% ( 8) 00:26:21.813 39798.225 - 40036.538: 99.6528% ( 8) 00:26:21.813 40036.538 - 40274.851: 99.7106% ( 8) 00:26:21.813 40274.851 - 40513.164: 99.7613% ( 7) 00:26:21.813 40513.164 - 40751.476: 99.8119% ( 7) 00:26:21.813 40751.476 - 40989.789: 99.8626% ( 7) 00:26:21.813 40989.789 - 41228.102: 99.9204% ( 8) 00:26:21.813 41228.102 - 41466.415: 99.9783% ( 8) 00:26:21.813 41466.415 - 41704.727: 100.0000% ( 3) 00:26:21.813 00:26:21.813 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:26:21.813 ============================================================================== 00:26:21.813 Range in us Cumulative IO count 00:26:21.813 7566.429 - 7596.218: 0.0145% ( 2) 00:26:21.813 7596.218 - 7626.007: 0.0217% ( 1) 00:26:21.813 7626.007 - 7685.585: 0.0651% ( 6) 00:26:21.813 7685.585 - 7745.164: 0.2098% ( 20) 00:26:21.813 7745.164 - 7804.742: 0.4919% ( 39) 00:26:21.813 7804.742 - 7864.320: 1.1140% ( 86) 00:26:21.813 7864.320 - 7923.898: 2.2352% ( 155) 00:26:21.813 7923.898 - 7983.476: 3.9207% ( 233) 00:26:21.813 7983.476 - 8043.055: 6.1704% ( 311) 00:26:21.813 8043.055 - 8102.633: 
9.0278% ( 395) 00:26:21.813 8102.633 - 8162.211: 12.1889% ( 437) 00:26:21.813 8162.211 - 8221.789: 15.4948% ( 457) 00:26:21.813 8221.789 - 8281.367: 18.9019% ( 471) 00:26:21.813 8281.367 - 8340.945: 22.3814% ( 481) 00:26:21.813 8340.945 - 8400.524: 25.9549% ( 494) 00:26:21.813 8400.524 - 8460.102: 29.5718% ( 500) 00:26:21.813 8460.102 - 8519.680: 33.3406% ( 521) 00:26:21.813 8519.680 - 8579.258: 37.0081% ( 507) 00:26:21.813 8579.258 - 8638.836: 40.7697% ( 520) 00:26:21.813 8638.836 - 8698.415: 44.6325% ( 534) 00:26:21.813 8698.415 - 8757.993: 48.3145% ( 509) 00:26:21.813 8757.993 - 8817.571: 52.0833% ( 521) 00:26:21.813 8817.571 - 8877.149: 55.6134% ( 488) 00:26:21.813 8877.149 - 8936.727: 59.1580% ( 490) 00:26:21.813 8936.727 - 8996.305: 62.5145% ( 464) 00:26:21.813 8996.305 - 9055.884: 65.4659% ( 408) 00:26:21.813 9055.884 - 9115.462: 68.2075% ( 379) 00:26:21.813 9115.462 - 9175.040: 70.5078% ( 318) 00:26:21.813 9175.040 - 9234.618: 72.5260% ( 279) 00:26:21.813 9234.618 - 9294.196: 74.3779% ( 256) 00:26:21.813 9294.196 - 9353.775: 76.1429% ( 244) 00:26:21.813 9353.775 - 9413.353: 77.7127% ( 217) 00:26:21.813 9413.353 - 9472.931: 79.1739% ( 202) 00:26:21.813 9472.931 - 9532.509: 80.5483% ( 190) 00:26:21.813 9532.509 - 9592.087: 81.7853% ( 171) 00:26:21.813 9592.087 - 9651.665: 83.0512% ( 175) 00:26:21.813 9651.665 - 9711.244: 84.2593% ( 167) 00:26:21.813 9711.244 - 9770.822: 85.3950% ( 157) 00:26:21.813 9770.822 - 9830.400: 86.4945% ( 152) 00:26:21.813 9830.400 - 9889.978: 87.4928% ( 138) 00:26:21.813 9889.978 - 9949.556: 88.3536% ( 119) 00:26:21.813 9949.556 - 10009.135: 89.0987% ( 103) 00:26:21.813 10009.135 - 10068.713: 89.7425% ( 89) 00:26:21.813 10068.713 - 10128.291: 90.3139% ( 79) 00:26:21.813 10128.291 - 10187.869: 90.8927% ( 80) 00:26:21.813 10187.869 - 10247.447: 91.4207% ( 73) 00:26:21.813 10247.447 - 10307.025: 91.8837% ( 64) 00:26:21.813 10307.025 - 10366.604: 92.3032% ( 58) 00:26:21.813 10366.604 - 10426.182: 92.6722% ( 51) 00:26:21.813 10426.182 - 10485.760: 92.9905% ( 44) 00:26:21.814 10485.760 - 10545.338: 93.3087% ( 44) 00:26:21.814 10545.338 - 10604.916: 93.5619% ( 35) 00:26:21.814 10604.916 - 10664.495: 93.7572% ( 27) 00:26:21.814 10664.495 - 10724.073: 93.9381% ( 25) 00:26:21.814 10724.073 - 10783.651: 94.1189% ( 25) 00:26:21.814 10783.651 - 10843.229: 94.3142% ( 27) 00:26:21.814 10843.229 - 10902.807: 94.4951% ( 25) 00:26:21.814 10902.807 - 10962.385: 94.6687% ( 24) 00:26:21.814 10962.385 - 11021.964: 94.8061% ( 19) 00:26:21.814 11021.964 - 11081.542: 94.9074% ( 14) 00:26:21.814 11081.542 - 11141.120: 95.0231% ( 16) 00:26:21.814 11141.120 - 11200.698: 95.1389% ( 16) 00:26:21.814 11200.698 - 11260.276: 95.2474% ( 15) 00:26:21.814 11260.276 - 11319.855: 95.3559% ( 15) 00:26:21.814 11319.855 - 11379.433: 95.4789% ( 17) 00:26:21.814 11379.433 - 11439.011: 95.5874% ( 15) 00:26:21.814 11439.011 - 11498.589: 95.6597% ( 10) 00:26:21.814 11498.589 - 11558.167: 95.7321% ( 10) 00:26:21.814 11558.167 - 11617.745: 95.8116% ( 11) 00:26:21.814 11617.745 - 11677.324: 95.8695% ( 8) 00:26:21.814 11677.324 - 11736.902: 95.9418% ( 10) 00:26:21.814 11736.902 - 11796.480: 96.0142% ( 10) 00:26:21.814 11796.480 - 11856.058: 96.1010% ( 12) 00:26:21.814 11856.058 - 11915.636: 96.1950% ( 13) 00:26:21.814 11915.636 - 11975.215: 96.3108% ( 16) 00:26:21.814 11975.215 - 12034.793: 96.3903% ( 11) 00:26:21.814 12034.793 - 12094.371: 96.4988% ( 15) 00:26:21.814 12094.371 - 12153.949: 96.6073% ( 15) 00:26:21.814 12153.949 - 12213.527: 96.7159% ( 15) 00:26:21.814 12213.527 - 12273.105: 96.8461% ( 18) 
00:26:21.814 12273.105 - 12332.684: 96.9618% ( 16) 00:26:21.814 12332.684 - 12392.262: 97.0775% ( 16) 00:26:21.814 12392.262 - 12451.840: 97.1788% ( 14) 00:26:21.814 12451.840 - 12511.418: 97.2729% ( 13) 00:26:21.814 12511.418 - 12570.996: 97.3669% ( 13) 00:26:21.814 12570.996 - 12630.575: 97.4465% ( 11) 00:26:21.814 12630.575 - 12690.153: 97.5405% ( 13) 00:26:21.814 12690.153 - 12749.731: 97.6056% ( 9) 00:26:21.814 12749.731 - 12809.309: 97.6924% ( 12) 00:26:21.814 12809.309 - 12868.887: 97.7648% ( 10) 00:26:21.814 12868.887 - 12928.465: 97.8226% ( 8) 00:26:21.814 12928.465 - 12988.044: 97.8877% ( 9) 00:26:21.814 12988.044 - 13047.622: 97.9384% ( 7) 00:26:21.814 13047.622 - 13107.200: 97.9962% ( 8) 00:26:21.814 13107.200 - 13166.778: 98.0541% ( 8) 00:26:21.814 13166.778 - 13226.356: 98.1264% ( 10) 00:26:21.814 13226.356 - 13285.935: 98.1916% ( 9) 00:26:21.814 13285.935 - 13345.513: 98.2422% ( 7) 00:26:21.814 13345.513 - 13405.091: 98.2856% ( 6) 00:26:21.814 13405.091 - 13464.669: 98.3073% ( 3) 00:26:21.814 13464.669 - 13524.247: 98.3290% ( 3) 00:26:21.814 13524.247 - 13583.825: 98.3507% ( 3) 00:26:21.814 13583.825 - 13643.404: 98.3869% ( 5) 00:26:21.814 13643.404 - 13702.982: 98.4230% ( 5) 00:26:21.814 13702.982 - 13762.560: 98.4664% ( 6) 00:26:21.814 13762.560 - 13822.138: 98.5026% ( 5) 00:26:21.814 13822.138 - 13881.716: 98.5532% ( 7) 00:26:21.814 13881.716 - 13941.295: 98.5894% ( 5) 00:26:21.814 13941.295 - 14000.873: 98.6328% ( 6) 00:26:21.814 14000.873 - 14060.451: 98.6762% ( 6) 00:26:21.814 14060.451 - 14120.029: 98.7196% ( 6) 00:26:21.814 14120.029 - 14179.607: 98.7630% ( 6) 00:26:21.814 14179.607 - 14239.185: 98.7920% ( 4) 00:26:21.814 14239.185 - 14298.764: 98.8064% ( 2) 00:26:21.814 14298.764 - 14358.342: 98.8281% ( 3) 00:26:21.814 14358.342 - 14417.920: 98.8426% ( 2) 00:26:21.814 14417.920 - 14477.498: 98.8571% ( 2) 00:26:21.814 14477.498 - 14537.076: 98.8715% ( 2) 00:26:21.814 14537.076 - 14596.655: 98.8932% ( 3) 00:26:21.814 14596.655 - 14656.233: 98.9077% ( 2) 00:26:21.814 14656.233 - 14715.811: 98.9222% ( 2) 00:26:21.814 14715.811 - 14775.389: 98.9366% ( 2) 00:26:21.814 14775.389 - 14834.967: 98.9583% ( 3) 00:26:21.814 14834.967 - 14894.545: 98.9728% ( 2) 00:26:21.814 14894.545 - 14954.124: 98.9873% ( 2) 00:26:21.814 14954.124 - 15013.702: 99.0017% ( 2) 00:26:21.814 15013.702 - 15073.280: 99.0234% ( 3) 00:26:21.814 15073.280 - 15132.858: 99.0379% ( 2) 00:26:21.814 15132.858 - 15192.436: 99.0524% ( 2) 00:26:21.814 15192.436 - 15252.015: 99.0668% ( 2) 00:26:21.814 15252.015 - 15371.171: 99.0741% ( 1) 00:26:21.814 31457.280 - 31695.593: 99.0885% ( 2) 00:26:21.814 31695.593 - 31933.905: 99.1392% ( 7) 00:26:21.814 31933.905 - 32172.218: 99.1970% ( 8) 00:26:21.814 32172.218 - 32410.531: 99.2549% ( 8) 00:26:21.814 32410.531 - 32648.844: 99.3200% ( 9) 00:26:21.814 32648.844 - 32887.156: 99.3707% ( 7) 00:26:21.814 32887.156 - 33125.469: 99.4285% ( 8) 00:26:21.814 33125.469 - 33363.782: 99.4864% ( 8) 00:26:21.814 33363.782 - 33602.095: 99.5370% ( 7) 00:26:21.814 38130.036 - 38368.349: 99.5587% ( 3) 00:26:21.814 38368.349 - 38606.662: 99.6166% ( 8) 00:26:21.814 38606.662 - 38844.975: 99.6672% ( 7) 00:26:21.814 38844.975 - 39083.287: 99.7251% ( 8) 00:26:21.814 39083.287 - 39321.600: 99.7830% ( 8) 00:26:21.814 39321.600 - 39559.913: 99.8336% ( 7) 00:26:21.814 39559.913 - 39798.225: 99.8915% ( 8) 00:26:21.814 39798.225 - 40036.538: 99.9494% ( 8) 00:26:21.814 40036.538 - 40274.851: 100.0000% ( 7) 00:26:21.814 00:26:21.814 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 
00:26:21.814 ============================================================================== 00:26:21.814 Range in us Cumulative IO count 00:26:21.814 7596.218 - 7626.007: 0.0072% ( 1) 00:26:21.814 7626.007 - 7685.585: 0.0723% ( 9) 00:26:21.814 7685.585 - 7745.164: 0.2315% ( 22) 00:26:21.814 7745.164 - 7804.742: 0.5281% ( 41) 00:26:21.814 7804.742 - 7864.320: 1.1212% ( 82) 00:26:21.814 7864.320 - 7923.898: 2.1484% ( 142) 00:26:21.814 7923.898 - 7983.476: 3.7977% ( 228) 00:26:21.814 7983.476 - 8043.055: 6.0764% ( 315) 00:26:21.814 8043.055 - 8102.633: 8.8542% ( 384) 00:26:21.814 8102.633 - 8162.211: 11.9430% ( 427) 00:26:21.814 8162.211 - 8221.789: 15.2127% ( 452) 00:26:21.814 8221.789 - 8281.367: 18.7138% ( 484) 00:26:21.814 8281.367 - 8340.945: 22.1644% ( 477) 00:26:21.814 8340.945 - 8400.524: 25.7668% ( 498) 00:26:21.814 8400.524 - 8460.102: 29.3692% ( 498) 00:26:21.814 8460.102 - 8519.680: 33.0512% ( 509) 00:26:21.814 8519.680 - 8579.258: 36.7694% ( 514) 00:26:21.814 8579.258 - 8638.836: 40.5310% ( 520) 00:26:21.814 8638.836 - 8698.415: 44.3070% ( 522) 00:26:21.814 8698.415 - 8757.993: 47.9673% ( 506) 00:26:21.814 8757.993 - 8817.571: 51.5553% ( 496) 00:26:21.814 8817.571 - 8877.149: 55.0637% ( 485) 00:26:21.814 8877.149 - 8936.727: 58.5938% ( 488) 00:26:21.814 8936.727 - 8996.305: 62.0732% ( 481) 00:26:21.814 8996.305 - 9055.884: 65.1620% ( 427) 00:26:21.814 9055.884 - 9115.462: 67.9398% ( 384) 00:26:21.814 9115.462 - 9175.040: 70.3053% ( 327) 00:26:21.814 9175.040 - 9234.618: 72.3018% ( 276) 00:26:21.814 9234.618 - 9294.196: 74.2115% ( 264) 00:26:21.814 9294.196 - 9353.775: 75.9766% ( 244) 00:26:21.814 9353.775 - 9413.353: 77.6042% ( 225) 00:26:21.814 9413.353 - 9472.931: 79.1160% ( 209) 00:26:21.814 9472.931 - 9532.509: 80.4905% ( 190) 00:26:21.814 9532.509 - 9592.087: 81.8359% ( 186) 00:26:21.814 9592.087 - 9651.665: 83.0802% ( 172) 00:26:21.814 9651.665 - 9711.244: 84.3244% ( 172) 00:26:21.814 9711.244 - 9770.822: 85.5469% ( 169) 00:26:21.814 9770.822 - 9830.400: 86.6464% ( 152) 00:26:21.814 9830.400 - 9889.978: 87.7315% ( 150) 00:26:21.814 9889.978 - 9949.556: 88.6140% ( 122) 00:26:21.814 9949.556 - 10009.135: 89.3591% ( 103) 00:26:21.814 10009.135 - 10068.713: 90.0174% ( 91) 00:26:21.814 10068.713 - 10128.291: 90.6467% ( 87) 00:26:21.814 10128.291 - 10187.869: 91.2109% ( 78) 00:26:21.814 10187.869 - 10247.447: 91.6377% ( 59) 00:26:21.814 10247.447 - 10307.025: 92.0284% ( 54) 00:26:21.814 10307.025 - 10366.604: 92.3683% ( 47) 00:26:21.814 10366.604 - 10426.182: 92.7156% ( 48) 00:26:21.814 10426.182 - 10485.760: 93.0339% ( 44) 00:26:21.814 10485.760 - 10545.338: 93.3087% ( 38) 00:26:21.814 10545.338 - 10604.916: 93.5619% ( 35) 00:26:21.814 10604.916 - 10664.495: 93.8006% ( 33) 00:26:21.814 10664.495 - 10724.073: 94.0249% ( 31) 00:26:21.814 10724.073 - 10783.651: 94.2274% ( 28) 00:26:21.814 10783.651 - 10843.229: 94.4300% ( 28) 00:26:21.814 10843.229 - 10902.807: 94.6253% ( 27) 00:26:21.814 10902.807 - 10962.385: 94.8351% ( 29) 00:26:21.814 10962.385 - 11021.964: 94.9797% ( 20) 00:26:21.814 11021.964 - 11081.542: 95.0666% ( 12) 00:26:21.814 11081.542 - 11141.120: 95.1534% ( 12) 00:26:21.814 11141.120 - 11200.698: 95.2112% ( 8) 00:26:21.814 11200.698 - 11260.276: 95.2763% ( 9) 00:26:21.814 11260.276 - 11319.855: 95.3487% ( 10) 00:26:21.814 11319.855 - 11379.433: 95.4499% ( 14) 00:26:21.814 11379.433 - 11439.011: 95.5295% ( 11) 00:26:21.814 11439.011 - 11498.589: 95.5946% ( 9) 00:26:21.814 11498.589 - 11558.167: 95.6959% ( 14) 00:26:21.814 11558.167 - 11617.745: 95.7827% ( 12) 
00:26:21.814 11617.745 - 11677.324: 95.8767% ( 13) 00:26:21.814 11677.324 - 11736.902: 95.9563% ( 11) 00:26:21.814 11736.902 - 11796.480: 96.0286% ( 10) 00:26:21.814 11796.480 - 11856.058: 96.1010% ( 10) 00:26:21.814 11856.058 - 11915.636: 96.1589% ( 8) 00:26:21.814 11915.636 - 11975.215: 96.2240% ( 9) 00:26:21.814 11975.215 - 12034.793: 96.2818% ( 8) 00:26:21.814 12034.793 - 12094.371: 96.3686% ( 12) 00:26:21.814 12094.371 - 12153.949: 96.4554% ( 12) 00:26:21.814 12153.949 - 12213.527: 96.5639% ( 15) 00:26:21.814 12213.527 - 12273.105: 96.6725% ( 15) 00:26:21.814 12273.105 - 12332.684: 96.8099% ( 19) 00:26:21.814 12332.684 - 12392.262: 96.9473% ( 19) 00:26:21.814 12392.262 - 12451.840: 97.0631% ( 16) 00:26:21.814 12451.840 - 12511.418: 97.1644% ( 14) 00:26:21.814 12511.418 - 12570.996: 97.2222% ( 8) 00:26:21.814 12570.996 - 12630.575: 97.2873% ( 9) 00:26:21.814 12630.575 - 12690.153: 97.3524% ( 9) 00:26:21.815 12690.153 - 12749.731: 97.4175% ( 9) 00:26:21.815 12749.731 - 12809.309: 97.4826% ( 9) 00:26:21.815 12809.309 - 12868.887: 97.5405% ( 8) 00:26:21.815 12868.887 - 12928.465: 97.6056% ( 9) 00:26:21.815 12928.465 - 12988.044: 97.6562% ( 7) 00:26:21.815 12988.044 - 13047.622: 97.7214% ( 9) 00:26:21.815 13047.622 - 13107.200: 97.7937% ( 10) 00:26:21.815 13107.200 - 13166.778: 97.8733% ( 11) 00:26:21.815 13166.778 - 13226.356: 97.9745% ( 14) 00:26:21.815 13226.356 - 13285.935: 98.0613% ( 12) 00:26:21.815 13285.935 - 13345.513: 98.1409% ( 11) 00:26:21.815 13345.513 - 13405.091: 98.2133% ( 10) 00:26:21.815 13405.091 - 13464.669: 98.2784% ( 9) 00:26:21.815 13464.669 - 13524.247: 98.3290% ( 7) 00:26:21.815 13524.247 - 13583.825: 98.3796% ( 7) 00:26:21.815 13583.825 - 13643.404: 98.4375% ( 8) 00:26:21.815 13643.404 - 13702.982: 98.5026% ( 9) 00:26:21.815 13702.982 - 13762.560: 98.5605% ( 8) 00:26:21.815 13762.560 - 13822.138: 98.6256% ( 9) 00:26:21.815 13822.138 - 13881.716: 98.6690% ( 6) 00:26:21.815 13881.716 - 13941.295: 98.7196% ( 7) 00:26:21.815 13941.295 - 14000.873: 98.7558% ( 5) 00:26:21.815 14000.873 - 14060.451: 98.7992% ( 6) 00:26:21.815 14060.451 - 14120.029: 98.8426% ( 6) 00:26:21.815 14120.029 - 14179.607: 98.8788% ( 5) 00:26:21.815 14179.607 - 14239.185: 98.9222% ( 6) 00:26:21.815 14239.185 - 14298.764: 98.9366% ( 2) 00:26:21.815 14298.764 - 14358.342: 98.9511% ( 2) 00:26:21.815 14358.342 - 14417.920: 98.9656% ( 2) 00:26:21.815 14417.920 - 14477.498: 98.9873% ( 3) 00:26:21.815 14477.498 - 14537.076: 99.0017% ( 2) 00:26:21.815 14537.076 - 14596.655: 99.0162% ( 2) 00:26:21.815 14596.655 - 14656.233: 99.0379% ( 3) 00:26:21.815 14656.233 - 14715.811: 99.0524% ( 2) 00:26:21.815 14715.811 - 14775.389: 99.0741% ( 3) 00:26:21.815 29312.465 - 29431.622: 99.0885% ( 2) 00:26:21.815 29431.622 - 29550.778: 99.1175% ( 4) 00:26:21.815 29550.778 - 29669.935: 99.1392% ( 3) 00:26:21.815 29669.935 - 29789.091: 99.1681% ( 4) 00:26:21.815 29789.091 - 29908.247: 99.2043% ( 5) 00:26:21.815 29908.247 - 30027.404: 99.2332% ( 4) 00:26:21.815 30027.404 - 30146.560: 99.2622% ( 4) 00:26:21.815 30146.560 - 30265.716: 99.2911% ( 4) 00:26:21.815 30265.716 - 30384.873: 99.3200% ( 4) 00:26:21.815 30384.873 - 30504.029: 99.3490% ( 4) 00:26:21.815 30504.029 - 30742.342: 99.4068% ( 8) 00:26:21.815 30742.342 - 30980.655: 99.4647% ( 8) 00:26:21.815 30980.655 - 31218.967: 99.5226% ( 8) 00:26:21.815 31218.967 - 31457.280: 99.5370% ( 2) 00:26:21.815 36223.535 - 36461.847: 99.5660% ( 4) 00:26:21.815 36461.847 - 36700.160: 99.6238% ( 8) 00:26:21.815 36700.160 - 36938.473: 99.6745% ( 7) 00:26:21.815 36938.473 - 37176.785: 
99.7323% ( 8) 00:26:21.815 37176.785 - 37415.098: 99.7830% ( 7) 00:26:21.815 37415.098 - 37653.411: 99.8409% ( 8) 00:26:21.815 37653.411 - 37891.724: 99.8987% ( 8) 00:26:21.815 37891.724 - 38130.036: 99.9566% ( 8) 00:26:21.815 38130.036 - 38368.349: 99.9928% ( 5) 00:26:21.815 38368.349 - 38606.662: 100.0000% ( 1) 00:26:21.815 00:26:21.815 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:26:21.815 ============================================================================== 00:26:21.815 Range in us Cumulative IO count 00:26:21.815 7626.007 - 7685.585: 0.0434% ( 6) 00:26:21.815 7685.585 - 7745.164: 0.1953% ( 21) 00:26:21.815 7745.164 - 7804.742: 0.5208% ( 45) 00:26:21.815 7804.742 - 7864.320: 1.2008% ( 94) 00:26:21.815 7864.320 - 7923.898: 2.2280% ( 142) 00:26:21.815 7923.898 - 7983.476: 3.9786% ( 242) 00:26:21.815 7983.476 - 8043.055: 6.1994% ( 307) 00:26:21.815 8043.055 - 8102.633: 8.9988% ( 387) 00:26:21.815 8102.633 - 8162.211: 11.9141% ( 403) 00:26:21.815 8162.211 - 8221.789: 15.1403% ( 446) 00:26:21.815 8221.789 - 8281.367: 18.4462% ( 457) 00:26:21.815 8281.367 - 8340.945: 21.9618% ( 486) 00:26:21.815 8340.945 - 8400.524: 25.5715% ( 499) 00:26:21.815 8400.524 - 8460.102: 29.1811% ( 499) 00:26:21.815 8460.102 - 8519.680: 32.8921% ( 513) 00:26:21.815 8519.680 - 8579.258: 36.6681% ( 522) 00:26:21.815 8579.258 - 8638.836: 40.4369% ( 521) 00:26:21.815 8638.836 - 8698.415: 44.1479% ( 513) 00:26:21.815 8698.415 - 8757.993: 48.0324% ( 537) 00:26:21.815 8757.993 - 8817.571: 51.5914% ( 492) 00:26:21.815 8817.571 - 8877.149: 55.2951% ( 512) 00:26:21.815 8877.149 - 8936.727: 58.7963% ( 484) 00:26:21.815 8936.727 - 8996.305: 62.2323% ( 475) 00:26:21.815 8996.305 - 9055.884: 65.3284% ( 428) 00:26:21.815 9055.884 - 9115.462: 67.9977% ( 369) 00:26:21.815 9115.462 - 9175.040: 70.3921% ( 331) 00:26:21.815 9175.040 - 9234.618: 72.5622% ( 300) 00:26:21.815 9234.618 - 9294.196: 74.5877% ( 280) 00:26:21.815 9294.196 - 9353.775: 76.5263% ( 268) 00:26:21.815 9353.775 - 9413.353: 78.2118% ( 233) 00:26:21.815 9413.353 - 9472.931: 79.6875% ( 204) 00:26:21.815 9472.931 - 9532.509: 81.0619% ( 190) 00:26:21.815 9532.509 - 9592.087: 82.4002% ( 185) 00:26:21.815 9592.087 - 9651.665: 83.6661% ( 175) 00:26:21.815 9651.665 - 9711.244: 84.8958% ( 170) 00:26:21.815 9711.244 - 9770.822: 86.0605% ( 161) 00:26:21.815 9770.822 - 9830.400: 87.0587% ( 138) 00:26:21.815 9830.400 - 9889.978: 88.0787% ( 141) 00:26:21.815 9889.978 - 9949.556: 88.9685% ( 123) 00:26:21.815 9949.556 - 10009.135: 89.7642% ( 110) 00:26:21.815 10009.135 - 10068.713: 90.4731% ( 98) 00:26:21.815 10068.713 - 10128.291: 91.0663% ( 82) 00:26:21.815 10128.291 - 10187.869: 91.6088% ( 75) 00:26:21.815 10187.869 - 10247.447: 92.0428% ( 60) 00:26:21.815 10247.447 - 10307.025: 92.4552% ( 57) 00:26:21.815 10307.025 - 10366.604: 92.8241% ( 51) 00:26:21.815 10366.604 - 10426.182: 93.1641% ( 47) 00:26:21.815 10426.182 - 10485.760: 93.4462% ( 39) 00:26:21.815 10485.760 - 10545.338: 93.7283% ( 39) 00:26:21.815 10545.338 - 10604.916: 93.9742% ( 34) 00:26:21.815 10604.916 - 10664.495: 94.2130% ( 33) 00:26:21.815 10664.495 - 10724.073: 94.4083% ( 27) 00:26:21.815 10724.073 - 10783.651: 94.6181% ( 29) 00:26:21.815 10783.651 - 10843.229: 94.8134% ( 27) 00:26:21.815 10843.229 - 10902.807: 94.9870% ( 24) 00:26:21.815 10902.807 - 10962.385: 95.1678% ( 25) 00:26:21.815 10962.385 - 11021.964: 95.3125% ( 20) 00:26:21.815 11021.964 - 11081.542: 95.4065% ( 13) 00:26:21.815 11081.542 - 11141.120: 95.4789% ( 10) 00:26:21.815 11141.120 - 11200.698: 95.5512% ( 10) 
00:26:21.815 11200.698 - 11260.276: 95.6380% ( 12) 00:26:21.815 11260.276 - 11319.855: 95.7104% ( 10) 00:26:21.815 11319.855 - 11379.433: 95.7972% ( 12) 00:26:21.815 11379.433 - 11439.011: 95.8406% ( 6) 00:26:21.815 11439.011 - 11498.589: 95.8840% ( 6) 00:26:21.815 11498.589 - 11558.167: 95.9201% ( 5) 00:26:21.815 11558.167 - 11617.745: 95.9491% ( 4) 00:26:21.815 11617.745 - 11677.324: 95.9852% ( 5) 00:26:21.815 11677.324 - 11736.902: 96.0142% ( 4) 00:26:21.815 11736.902 - 11796.480: 96.0503% ( 5) 00:26:21.815 11796.480 - 11856.058: 96.0865% ( 5) 00:26:21.815 11856.058 - 11915.636: 96.1227% ( 5) 00:26:21.815 11915.636 - 11975.215: 96.1589% ( 5) 00:26:21.815 11975.215 - 12034.793: 96.1950% ( 5) 00:26:21.815 12034.793 - 12094.371: 96.2312% ( 5) 00:26:21.815 12094.371 - 12153.949: 96.2891% ( 8) 00:26:21.815 12153.949 - 12213.527: 96.3325% ( 6) 00:26:21.815 12213.527 - 12273.105: 96.3831% ( 7) 00:26:21.815 12273.105 - 12332.684: 96.4337% ( 7) 00:26:21.815 12332.684 - 12392.262: 96.4916% ( 8) 00:26:21.815 12392.262 - 12451.840: 96.5278% ( 5) 00:26:21.815 12451.840 - 12511.418: 96.5712% ( 6) 00:26:21.815 12511.418 - 12570.996: 96.6218% ( 7) 00:26:21.815 12570.996 - 12630.575: 96.6652% ( 6) 00:26:21.815 12630.575 - 12690.153: 96.7303% ( 9) 00:26:21.815 12690.153 - 12749.731: 96.8027% ( 10) 00:26:21.815 12749.731 - 12809.309: 96.8822% ( 11) 00:26:21.815 12809.309 - 12868.887: 96.9618% ( 11) 00:26:21.815 12868.887 - 12928.465: 97.0775% ( 16) 00:26:21.815 12928.465 - 12988.044: 97.1861% ( 15) 00:26:21.815 12988.044 - 13047.622: 97.2873% ( 14) 00:26:21.815 13047.622 - 13107.200: 97.3814% ( 13) 00:26:21.815 13107.200 - 13166.778: 97.4899% ( 15) 00:26:21.815 13166.778 - 13226.356: 97.5839% ( 13) 00:26:21.815 13226.356 - 13285.935: 97.6852% ( 14) 00:26:21.815 13285.935 - 13345.513: 97.7575% ( 10) 00:26:21.815 13345.513 - 13405.091: 97.8516% ( 13) 00:26:21.815 13405.091 - 13464.669: 97.9167% ( 9) 00:26:21.815 13464.669 - 13524.247: 97.9673% ( 7) 00:26:21.815 13524.247 - 13583.825: 98.0252% ( 8) 00:26:21.815 13583.825 - 13643.404: 98.0830% ( 8) 00:26:21.815 13643.404 - 13702.982: 98.1481% ( 9) 00:26:21.815 13702.982 - 13762.560: 98.2060% ( 8) 00:26:21.815 13762.560 - 13822.138: 98.2639% ( 8) 00:26:21.815 13822.138 - 13881.716: 98.3290% ( 9) 00:26:21.815 13881.716 - 13941.295: 98.3796% ( 7) 00:26:21.815 13941.295 - 14000.873: 98.4447% ( 9) 00:26:21.815 14000.873 - 14060.451: 98.4809% ( 5) 00:26:21.815 14060.451 - 14120.029: 98.5171% ( 5) 00:26:21.815 14120.029 - 14179.607: 98.5460% ( 4) 00:26:21.815 14179.607 - 14239.185: 98.5749% ( 4) 00:26:21.815 14239.185 - 14298.764: 98.6039% ( 4) 00:26:21.815 14298.764 - 14358.342: 98.6111% ( 1) 00:26:21.815 14358.342 - 14417.920: 98.6256% ( 2) 00:26:21.815 14417.920 - 14477.498: 98.6473% ( 3) 00:26:21.815 14477.498 - 14537.076: 98.6762% ( 4) 00:26:21.815 14537.076 - 14596.655: 98.7052% ( 4) 00:26:21.815 14596.655 - 14656.233: 98.7341% ( 4) 00:26:21.815 14656.233 - 14715.811: 98.7558% ( 3) 00:26:21.815 14715.811 - 14775.389: 98.7703% ( 2) 00:26:21.815 14775.389 - 14834.967: 98.7992% ( 4) 00:26:21.815 14834.967 - 14894.545: 98.8209% ( 3) 00:26:21.815 14894.545 - 14954.124: 98.8426% ( 3) 00:26:21.815 14954.124 - 15013.702: 98.8643% ( 3) 00:26:21.815 15013.702 - 15073.280: 98.8932% ( 4) 00:26:21.816 15073.280 - 15132.858: 98.9149% ( 3) 00:26:21.816 15132.858 - 15192.436: 98.9366% ( 3) 00:26:21.816 15192.436 - 15252.015: 98.9583% ( 3) 00:26:21.816 15252.015 - 15371.171: 99.0017% ( 6) 00:26:21.816 15371.171 - 15490.327: 99.0451% ( 6) 00:26:21.816 15490.327 - 15609.484: 
99.0741% ( 4) 00:26:21.816 27167.651 - 27286.807: 99.0885% ( 2) 00:26:21.816 27286.807 - 27405.964: 99.1102% ( 3) 00:26:21.816 27405.964 - 27525.120: 99.1392% ( 4) 00:26:21.816 27525.120 - 27644.276: 99.1681% ( 4) 00:26:21.816 27644.276 - 27763.433: 99.1970% ( 4) 00:26:21.816 27763.433 - 27882.589: 99.2332% ( 5) 00:26:21.816 27882.589 - 28001.745: 99.2549% ( 3) 00:26:21.816 28001.745 - 28120.902: 99.2911% ( 5) 00:26:21.816 28120.902 - 28240.058: 99.3200% ( 4) 00:26:21.816 28240.058 - 28359.215: 99.3490% ( 4) 00:26:21.816 28359.215 - 28478.371: 99.3779% ( 4) 00:26:21.816 28478.371 - 28597.527: 99.4068% ( 4) 00:26:21.816 28597.527 - 28716.684: 99.4358% ( 4) 00:26:21.816 28716.684 - 28835.840: 99.4647% ( 4) 00:26:21.816 28835.840 - 28954.996: 99.4936% ( 4) 00:26:21.816 28954.996 - 29074.153: 99.5226% ( 4) 00:26:21.816 29074.153 - 29193.309: 99.5370% ( 2) 00:26:21.816 34317.033 - 34555.345: 99.5804% ( 6) 00:26:21.816 34555.345 - 34793.658: 99.6455% ( 9) 00:26:21.816 34793.658 - 35031.971: 99.6889% ( 6) 00:26:21.816 35031.971 - 35270.284: 99.7396% ( 7) 00:26:21.816 35270.284 - 35508.596: 99.7975% ( 8) 00:26:21.816 35508.596 - 35746.909: 99.8553% ( 8) 00:26:21.816 35746.909 - 35985.222: 99.9204% ( 9) 00:26:21.816 35985.222 - 36223.535: 99.9783% ( 8) 00:26:21.816 36223.535 - 36461.847: 100.0000% ( 3) 00:26:21.816 00:26:21.816 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:26:21.816 ============================================================================== 00:26:21.816 Range in us Cumulative IO count 00:26:21.816 7626.007 - 7685.585: 0.0360% ( 5) 00:26:21.816 7685.585 - 7745.164: 0.2376% ( 28) 00:26:21.816 7745.164 - 7804.742: 0.5400% ( 42) 00:26:21.816 7804.742 - 7864.320: 1.1737% ( 88) 00:26:21.816 7864.320 - 7923.898: 2.2393% ( 148) 00:26:21.816 7923.898 - 7983.476: 3.9747% ( 241) 00:26:21.816 7983.476 - 8043.055: 6.2068% ( 310) 00:26:21.816 8043.055 - 8102.633: 8.7486% ( 353) 00:26:21.816 8102.633 - 8162.211: 11.7656% ( 419) 00:26:21.816 8162.211 - 8221.789: 14.8546% ( 429) 00:26:21.816 8221.789 - 8281.367: 18.1668% ( 460) 00:26:21.816 8281.367 - 8340.945: 21.6662% ( 486) 00:26:21.816 8340.945 - 8400.524: 25.1656% ( 486) 00:26:21.816 8400.524 - 8460.102: 28.7730% ( 501) 00:26:21.816 8460.102 - 8519.680: 32.5173% ( 520) 00:26:21.816 8519.680 - 8579.258: 36.2615% ( 520) 00:26:21.816 8579.258 - 8638.836: 40.0922% ( 532) 00:26:21.816 8638.836 - 8698.415: 43.8148% ( 517) 00:26:21.816 8698.415 - 8757.993: 47.6743% ( 536) 00:26:21.816 8757.993 - 8817.571: 51.5265% ( 535) 00:26:21.816 8817.571 - 8877.149: 55.3499% ( 531) 00:26:21.816 8877.149 - 8936.727: 58.9430% ( 499) 00:26:21.816 8936.727 - 8996.305: 62.4208% ( 483) 00:26:21.816 8996.305 - 9055.884: 65.5314% ( 432) 00:26:21.816 9055.884 - 9115.462: 68.2460% ( 377) 00:26:21.816 9115.462 - 9175.040: 70.6509% ( 334) 00:26:21.816 9175.040 - 9234.618: 72.7679% ( 294) 00:26:21.816 9234.618 - 9294.196: 74.7120% ( 270) 00:26:21.816 9294.196 - 9353.775: 76.5049% ( 249) 00:26:21.816 9353.775 - 9413.353: 78.1538% ( 229) 00:26:21.816 9413.353 - 9472.931: 79.6515% ( 208) 00:26:21.816 9472.931 - 9532.509: 81.0484% ( 194) 00:26:21.816 9532.509 - 9592.087: 82.3877% ( 186) 00:26:21.816 9592.087 - 9651.665: 83.6334% ( 173) 00:26:21.816 9651.665 - 9711.244: 84.8358% ( 167) 00:26:21.816 9711.244 - 9770.822: 85.9447% ( 154) 00:26:21.816 9770.822 - 9830.400: 86.9600% ( 141) 00:26:21.816 9830.400 - 9889.978: 87.8240% ( 120) 00:26:21.816 9889.978 - 9949.556: 88.5585% ( 102) 00:26:21.816 9949.556 - 10009.135: 89.2065% ( 90) 00:26:21.816 10009.135 - 
10068.713: 89.7825% ( 80) 00:26:21.816 10068.713 - 10128.291: 90.2794% ( 69) 00:26:21.816 10128.291 - 10187.869: 90.7690% ( 68) 00:26:21.816 10187.869 - 10247.447: 91.1866% ( 58) 00:26:21.816 10247.447 - 10307.025: 91.5755% ( 54) 00:26:21.816 10307.025 - 10366.604: 91.9859% ( 57) 00:26:21.816 10366.604 - 10426.182: 92.3819% ( 55) 00:26:21.816 10426.182 - 10485.760: 92.7563% ( 52) 00:26:21.816 10485.760 - 10545.338: 93.0732% ( 44) 00:26:21.816 10545.338 - 10604.916: 93.2964% ( 31) 00:26:21.816 10604.916 - 10664.495: 93.5412% ( 34) 00:26:21.816 10664.495 - 10724.073: 93.7428% ( 28) 00:26:21.816 10724.073 - 10783.651: 93.9732% ( 32) 00:26:21.816 10783.651 - 10843.229: 94.1676% ( 27) 00:26:21.816 10843.229 - 10902.807: 94.3692% ( 28) 00:26:21.816 10902.807 - 10962.385: 94.5637% ( 27) 00:26:21.816 10962.385 - 11021.964: 94.7149% ( 21) 00:26:21.816 11021.964 - 11081.542: 94.8589% ( 20) 00:26:21.816 11081.542 - 11141.120: 94.9813% ( 17) 00:26:21.816 11141.120 - 11200.698: 95.1181% ( 19) 00:26:21.816 11200.698 - 11260.276: 95.2621% ( 20) 00:26:21.816 11260.276 - 11319.855: 95.3845% ( 17) 00:26:21.816 11319.855 - 11379.433: 95.5069% ( 17) 00:26:21.816 11379.433 - 11439.011: 95.5861% ( 11) 00:26:21.816 11439.011 - 11498.589: 95.6653% ( 11) 00:26:21.816 11498.589 - 11558.167: 95.7517% ( 12) 00:26:21.816 11558.167 - 11617.745: 95.8309% ( 11) 00:26:21.816 11617.745 - 11677.324: 95.8957% ( 9) 00:26:21.816 11677.324 - 11736.902: 95.9461% ( 7) 00:26:21.816 11736.902 - 11796.480: 95.9965% ( 7) 00:26:21.816 11796.480 - 11856.058: 96.0469% ( 7) 00:26:21.816 11856.058 - 11915.636: 96.0974% ( 7) 00:26:21.816 11915.636 - 11975.215: 96.1262% ( 4) 00:26:21.816 11975.215 - 12034.793: 96.1478% ( 3) 00:26:21.816 12034.793 - 12094.371: 96.1766% ( 4) 00:26:21.816 12094.371 - 12153.949: 96.1982% ( 3) 00:26:21.816 12153.949 - 12213.527: 96.2342% ( 5) 00:26:21.816 12213.527 - 12273.105: 96.2774% ( 6) 00:26:21.816 12273.105 - 12332.684: 96.3206% ( 6) 00:26:21.816 12332.684 - 12392.262: 96.3998% ( 11) 00:26:21.816 12392.262 - 12451.840: 96.4934% ( 13) 00:26:21.816 12451.840 - 12511.418: 96.5582% ( 9) 00:26:21.816 12511.418 - 12570.996: 96.6302% ( 10) 00:26:21.816 12570.996 - 12630.575: 96.6950% ( 9) 00:26:21.816 12630.575 - 12690.153: 96.7814% ( 12) 00:26:21.816 12690.153 - 12749.731: 96.8678% ( 12) 00:26:21.816 12749.731 - 12809.309: 96.9614% ( 13) 00:26:21.816 12809.309 - 12868.887: 97.0406% ( 11) 00:26:21.816 12868.887 - 12928.465: 97.1342% ( 13) 00:26:21.816 12928.465 - 12988.044: 97.2134% ( 11) 00:26:21.816 12988.044 - 13047.622: 97.2998% ( 12) 00:26:21.816 13047.622 - 13107.200: 97.4006% ( 14) 00:26:21.816 13107.200 - 13166.778: 97.5014% ( 14) 00:26:21.816 13166.778 - 13226.356: 97.6022% ( 14) 00:26:21.816 13226.356 - 13285.935: 97.7103% ( 15) 00:26:21.816 13285.935 - 13345.513: 97.7967% ( 12) 00:26:21.816 13345.513 - 13405.091: 97.9191% ( 17) 00:26:21.816 13405.091 - 13464.669: 98.0271% ( 15) 00:26:21.816 13464.669 - 13524.247: 98.0919% ( 9) 00:26:21.816 13524.247 - 13583.825: 98.1495% ( 8) 00:26:21.816 13583.825 - 13643.404: 98.2071% ( 8) 00:26:21.816 13643.404 - 13702.982: 98.2719% ( 9) 00:26:21.816 13702.982 - 13762.560: 98.3295% ( 8) 00:26:21.816 13762.560 - 13822.138: 98.3727% ( 6) 00:26:21.816 13822.138 - 13881.716: 98.3871% ( 2) 00:26:21.816 13881.716 - 13941.295: 98.4015% ( 2) 00:26:21.816 13941.295 - 14000.873: 98.4159% ( 2) 00:26:21.816 14000.873 - 14060.451: 98.4303% ( 2) 00:26:21.816 14060.451 - 14120.029: 98.4447% ( 2) 00:26:21.816 14120.029 - 14179.607: 98.4663% ( 3) 00:26:21.816 14179.607 - 14239.185: 
98.4807% ( 2) 00:26:21.816 14239.185 - 14298.764: 98.5023% ( 3) 00:26:21.816 14298.764 - 14358.342: 98.5239% ( 3) 00:26:21.816 14358.342 - 14417.920: 98.5671% ( 6) 00:26:21.816 14417.920 - 14477.498: 98.6031% ( 5) 00:26:21.816 14477.498 - 14537.076: 98.6535% ( 7) 00:26:21.816 14537.076 - 14596.655: 98.6967% ( 6) 00:26:21.816 14596.655 - 14656.233: 98.7327% ( 5) 00:26:21.816 14656.233 - 14715.811: 98.7687% ( 5) 00:26:21.816 14715.811 - 14775.389: 98.7975% ( 4) 00:26:21.816 14775.389 - 14834.967: 98.8191% ( 3) 00:26:21.816 14834.967 - 14894.545: 98.8407% ( 3) 00:26:21.816 14894.545 - 14954.124: 98.8623% ( 3) 00:26:21.816 14954.124 - 15013.702: 98.8839% ( 3) 00:26:21.816 15013.702 - 15073.280: 98.9055% ( 3) 00:26:21.816 15073.280 - 15132.858: 98.9271% ( 3) 00:26:21.817 15132.858 - 15192.436: 98.9487% ( 3) 00:26:21.817 15192.436 - 15252.015: 98.9703% ( 3) 00:26:21.817 15252.015 - 15371.171: 99.0207% ( 7) 00:26:21.817 15371.171 - 15490.327: 99.0639% ( 6) 00:26:21.817 15490.327 - 15609.484: 99.0783% ( 2) 00:26:21.817 19660.800 - 19779.956: 99.0855% ( 1) 00:26:21.817 19779.956 - 19899.113: 99.1071% ( 3) 00:26:21.817 19899.113 - 20018.269: 99.1359% ( 4) 00:26:21.817 20018.269 - 20137.425: 99.1575% ( 3) 00:26:21.817 20137.425 - 20256.582: 99.1863% ( 4) 00:26:21.817 20256.582 - 20375.738: 99.2151% ( 4) 00:26:21.817 20375.738 - 20494.895: 99.2440% ( 4) 00:26:21.817 20494.895 - 20614.051: 99.2728% ( 4) 00:26:21.817 20614.051 - 20733.207: 99.2944% ( 3) 00:26:21.817 20733.207 - 20852.364: 99.3232% ( 4) 00:26:21.817 20852.364 - 20971.520: 99.3448% ( 3) 00:26:21.817 20971.520 - 21090.676: 99.3736% ( 4) 00:26:21.817 21090.676 - 21209.833: 99.4024% ( 4) 00:26:21.817 21209.833 - 21328.989: 99.4240% ( 3) 00:26:21.817 21328.989 - 21448.145: 99.4528% ( 4) 00:26:21.817 21448.145 - 21567.302: 99.4816% ( 4) 00:26:21.817 21567.302 - 21686.458: 99.5104% ( 4) 00:26:21.817 21686.458 - 21805.615: 99.5392% ( 4) 00:26:21.817 26452.713 - 26571.869: 99.5536% ( 2) 00:26:21.817 26571.869 - 26691.025: 99.5824% ( 4) 00:26:21.817 26691.025 - 26810.182: 99.6112% ( 4) 00:26:21.817 26810.182 - 26929.338: 99.6400% ( 4) 00:26:21.817 26929.338 - 27048.495: 99.6616% ( 3) 00:26:21.817 27048.495 - 27167.651: 99.6904% ( 4) 00:26:21.817 27167.651 - 27286.807: 99.7192% ( 4) 00:26:21.817 27286.807 - 27405.964: 99.7480% ( 4) 00:26:21.817 27405.964 - 27525.120: 99.7840% ( 5) 00:26:21.817 27525.120 - 27644.276: 99.8128% ( 4) 00:26:21.817 27644.276 - 27763.433: 99.8416% ( 4) 00:26:21.817 27763.433 - 27882.589: 99.8704% ( 4) 00:26:21.817 27882.589 - 28001.745: 99.8992% ( 4) 00:26:21.817 28001.745 - 28120.902: 99.9280% ( 4) 00:26:21.817 28120.902 - 28240.058: 99.9568% ( 4) 00:26:21.817 28240.058 - 28359.215: 99.9856% ( 4) 00:26:21.817 28359.215 - 28478.371: 100.0000% ( 2) 00:26:21.817 00:26:21.817 06:52:54 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:26:23.202 Initializing NVMe Controllers 00:26:23.202 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:23.202 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:26:23.202 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:26:23.202 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:26:23.202 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:23.202 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:26:23.202 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:26:23.202 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:26:23.202 Associating PCIE 
(0000:00:12.0) NSID 2 with lcore 0
00:26:23.202 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:26:23.202 Initialization complete. Launching workers.
00:26:23.202 ========================================================
00:26:23.202 Latency(us)
00:26:23.202 Device Information : IOPS MiB/s Average min max
00:26:23.202 PCIE (0000:00:10.0) NSID 1 from core 0: 11319.28 132.65 11331.34 9203.08 44184.76
00:26:23.202 PCIE (0000:00:11.0) NSID 1 from core 0: 11319.28 132.65 11303.17 9335.82 41659.69
00:26:23.202 PCIE (0000:00:13.0) NSID 1 from core 0: 11319.28 132.65 11274.53 9259.63 39525.50
00:26:23.202 PCIE (0000:00:12.0) NSID 1 from core 0: 11319.28 132.65 11242.25 9203.54 36735.00
00:26:23.202 PCIE (0000:00:12.0) NSID 2 from core 0: 11319.28 132.65 11213.90 9249.86 33873.08
00:26:23.202 PCIE (0000:00:12.0) NSID 3 from core 0: 11319.28 132.65 11187.28 9403.18 31282.69
00:26:23.202 ========================================================
00:26:23.202 Total : 67915.70 795.89 11258.74 9203.08 44184.76
00:26:23.202
00:26:23.202 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:26:23.202 =================================================================================
00:26:23.202 1.00000% : 9472.931us
00:26:23.202 10.00000% : 9889.978us
00:26:23.202 25.00000% : 10247.447us
00:26:23.202 50.00000% : 10783.651us
00:26:23.202 75.00000% : 11498.589us
00:26:23.202 90.00000% : 13047.622us
00:26:23.202 95.00000% : 13881.716us
00:26:23.202 98.00000% : 15192.436us
00:26:23.202 99.00000% : 32410.531us
00:26:23.202 99.50000% : 41943.040us
00:26:23.202 99.90000% : 43849.542us
00:26:23.202 99.99000% : 44087.855us
00:26:23.202 99.99900% : 44326.167us
00:26:23.202 99.99990% : 44326.167us
00:26:23.202 99.99999% : 44326.167us
00:26:23.202
00:26:23.202 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:26:23.202 =================================================================================
00:26:23.202 1.00000% : 9592.087us
00:26:23.202 10.00000% : 9949.556us
00:26:23.202 25.00000% : 10307.025us
00:26:23.202 50.00000% : 10724.073us
00:26:23.202 75.00000% : 11439.011us
00:26:23.202 90.00000% : 12988.044us
00:26:23.202 95.00000% : 13762.560us
00:26:23.202 98.00000% : 14834.967us
00:26:23.202 99.00000% : 30980.655us
00:26:23.202 99.50000% : 39321.600us
00:26:23.202 99.90000% : 41466.415us
00:26:23.202 99.99000% : 41704.727us
00:26:23.202 99.99900% : 41704.727us
00:26:23.202 99.99990% : 41704.727us
00:26:23.202 99.99999% : 41704.727us
00:26:23.202
00:26:23.202 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:26:23.202 =================================================================================
00:26:23.202 1.00000% : 9592.087us
00:26:23.202 10.00000% : 9949.556us
00:26:23.202 25.00000% : 10247.447us
00:26:23.202 50.00000% : 10724.073us
00:26:23.202 75.00000% : 11379.433us
00:26:23.202 90.00000% : 12988.044us
00:26:23.202 95.00000% : 13822.138us
00:26:23.202 98.00000% : 14894.545us
00:26:23.202 99.00000% : 28954.996us
00:26:23.202 99.50000% : 37415.098us
00:26:23.202 99.90000% : 39321.600us
00:26:23.202 99.99000% : 39559.913us
00:26:23.202 99.99900% : 39559.913us
00:26:23.202 99.99990% : 39559.913us
00:26:23.202 99.99999% : 39559.913us
00:26:23.202
00:26:23.202 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:26:23.202 =================================================================================
00:26:23.202 1.00000% : 9651.665us
00:26:23.202 10.00000% : 9949.556us
00:26:23.202 25.00000% : 10307.025us
00:26:23.202 50.00000% :
10724.073us 00:26:23.202 75.00000% : 11379.433us 00:26:23.202 90.00000% : 12988.044us 00:26:23.202 95.00000% : 13762.560us 00:26:23.202 98.00000% : 14834.967us 00:26:23.202 99.00000% : 26214.400us 00:26:23.202 99.50000% : 34555.345us 00:26:23.202 99.90000% : 36223.535us 00:26:23.202 99.99000% : 36700.160us 00:26:23.202 99.99900% : 36938.473us 00:26:23.202 99.99990% : 36938.473us 00:26:23.202 99.99999% : 36938.473us 00:26:23.202 00:26:23.202 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:26:23.202 ================================================================================= 00:26:23.202 1.00000% : 9592.087us 00:26:23.202 10.00000% : 9949.556us 00:26:23.202 25.00000% : 10307.025us 00:26:23.202 50.00000% : 10724.073us 00:26:23.202 75.00000% : 11439.011us 00:26:23.202 90.00000% : 12988.044us 00:26:23.202 95.00000% : 13822.138us 00:26:23.202 98.00000% : 14954.124us 00:26:23.202 99.00000% : 23712.116us 00:26:23.202 99.50000% : 31933.905us 00:26:23.202 99.90000% : 33602.095us 00:26:23.202 99.99000% : 34078.720us 00:26:23.202 99.99900% : 34078.720us 00:26:23.202 99.99990% : 34078.720us 00:26:23.202 99.99999% : 34078.720us 00:26:23.202 00:26:23.202 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:26:23.202 ================================================================================= 00:26:23.202 1.00000% : 9651.665us 00:26:23.202 10.00000% : 10009.135us 00:26:23.202 25.00000% : 10307.025us 00:26:23.202 50.00000% : 10724.073us 00:26:23.202 75.00000% : 11439.011us 00:26:23.202 90.00000% : 13047.622us 00:26:23.202 95.00000% : 13762.560us 00:26:23.202 98.00000% : 14894.545us 00:26:23.202 99.00000% : 21448.145us 00:26:23.202 99.50000% : 29312.465us 00:26:23.202 99.90000% : 30980.655us 00:26:23.202 99.99000% : 31457.280us 00:26:23.202 99.99900% : 31457.280us 00:26:23.202 99.99990% : 31457.280us 00:26:23.202 99.99999% : 31457.280us 00:26:23.202 00:26:23.202 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:26:23.202 ============================================================================== 00:26:23.202 Range in us Cumulative IO count 00:26:23.202 9175.040 - 9234.618: 0.0265% ( 3) 00:26:23.202 9234.618 - 9294.196: 0.0971% ( 8) 00:26:23.202 9294.196 - 9353.775: 0.3884% ( 33) 00:26:23.202 9353.775 - 9413.353: 0.8475% ( 52) 00:26:23.202 9413.353 - 9472.931: 1.5007% ( 74) 00:26:23.202 9472.931 - 9532.509: 2.3923% ( 101) 00:26:23.202 9532.509 - 9592.087: 3.4605% ( 121) 00:26:23.202 9592.087 - 9651.665: 4.6875% ( 139) 00:26:23.202 9651.665 - 9711.244: 5.9234% ( 140) 00:26:23.202 9711.244 - 9770.822: 7.3623% ( 163) 00:26:23.202 9770.822 - 9830.400: 9.0042% ( 186) 00:26:23.202 9830.400 - 9889.978: 11.0611% ( 233) 00:26:23.202 9889.978 - 9949.556: 13.4181% ( 267) 00:26:23.202 9949.556 - 10009.135: 16.0752% ( 301) 00:26:23.202 10009.135 - 10068.713: 18.4410% ( 268) 00:26:23.202 10068.713 - 10128.291: 20.8069% ( 268) 00:26:23.202 10128.291 - 10187.869: 23.3757% ( 291) 00:26:23.202 10187.869 - 10247.447: 26.1653% ( 316) 00:26:23.202 10247.447 - 10307.025: 28.9636% ( 317) 00:26:23.202 10307.025 - 10366.604: 31.8768% ( 330) 00:26:23.202 10366.604 - 10426.182: 34.7987% ( 331) 00:26:23.202 10426.182 - 10485.760: 37.7119% ( 330) 00:26:23.202 10485.760 - 10545.338: 40.5456% ( 321) 00:26:23.202 10545.338 - 10604.916: 43.6970% ( 357) 00:26:23.202 10604.916 - 10664.495: 46.6190% ( 331) 00:26:23.202 10664.495 - 10724.073: 49.7617% ( 356) 00:26:23.202 10724.073 - 10783.651: 52.6660% ( 329) 00:26:23.202 10783.651 - 10843.229: 55.4025% ( 310) 00:26:23.202 10843.229 
- 10902.807: 57.9714% ( 291) 00:26:23.202 10902.807 - 10962.385: 60.4961% ( 286) 00:26:23.202 10962.385 - 11021.964: 62.7913% ( 260) 00:26:23.202 11021.964 - 11081.542: 64.9011% ( 239) 00:26:23.202 11081.542 - 11141.120: 66.8167% ( 217) 00:26:23.202 11141.120 - 11200.698: 68.7059% ( 214) 00:26:23.202 11200.698 - 11260.276: 70.5155% ( 205) 00:26:23.202 11260.276 - 11319.855: 72.1222% ( 182) 00:26:23.203 11319.855 - 11379.433: 73.7023% ( 179) 00:26:23.203 11379.433 - 11439.011: 74.8499% ( 130) 00:26:23.203 11439.011 - 11498.589: 75.9710% ( 127) 00:26:23.203 11498.589 - 11558.167: 76.9951% ( 116) 00:26:23.203 11558.167 - 11617.745: 77.8513% ( 97) 00:26:23.203 11617.745 - 11677.324: 78.7165% ( 98) 00:26:23.203 11677.324 - 11736.902: 79.4933% ( 88) 00:26:23.203 11736.902 - 11796.480: 80.2260% ( 83) 00:26:23.203 11796.480 - 11856.058: 80.9763% ( 85) 00:26:23.203 11856.058 - 11915.636: 81.5148% ( 61) 00:26:23.203 11915.636 - 11975.215: 82.0180% ( 57) 00:26:23.203 11975.215 - 12034.793: 82.5653% ( 62) 00:26:23.203 12034.793 - 12094.371: 83.1656% ( 68) 00:26:23.203 12094.371 - 12153.949: 83.6600% ( 56) 00:26:23.203 12153.949 - 12213.527: 84.2956% ( 72) 00:26:23.203 12213.527 - 12273.105: 84.8076% ( 58) 00:26:23.203 12273.105 - 12332.684: 85.3107% ( 57) 00:26:23.203 12332.684 - 12392.262: 85.8227% ( 58) 00:26:23.203 12392.262 - 12451.840: 86.1670% ( 39) 00:26:23.203 12451.840 - 12511.418: 86.5643% ( 45) 00:26:23.203 12511.418 - 12570.996: 86.9085% ( 39) 00:26:23.203 12570.996 - 12630.575: 87.3941% ( 55) 00:26:23.203 12630.575 - 12690.153: 87.7383% ( 39) 00:26:23.203 12690.153 - 12749.731: 88.1974% ( 52) 00:26:23.203 12749.731 - 12809.309: 88.5505% ( 40) 00:26:23.203 12809.309 - 12868.887: 88.8683% ( 36) 00:26:23.203 12868.887 - 12928.465: 89.2391% ( 42) 00:26:23.203 12928.465 - 12988.044: 89.7334% ( 56) 00:26:23.203 12988.044 - 13047.622: 90.1748% ( 50) 00:26:23.203 13047.622 - 13107.200: 90.6250% ( 51) 00:26:23.203 13107.200 - 13166.778: 91.0134% ( 44) 00:26:23.203 13166.778 - 13226.356: 91.3842% ( 42) 00:26:23.203 13226.356 - 13285.935: 91.7638% ( 43) 00:26:23.203 13285.935 - 13345.513: 92.1169% ( 40) 00:26:23.203 13345.513 - 13405.091: 92.5406% ( 48) 00:26:23.203 13405.091 - 13464.669: 92.9290% ( 44) 00:26:23.203 13464.669 - 13524.247: 93.3086% ( 43) 00:26:23.203 13524.247 - 13583.825: 93.6264% ( 36) 00:26:23.203 13583.825 - 13643.404: 93.9795% ( 40) 00:26:23.203 13643.404 - 13702.982: 94.3503% ( 42) 00:26:23.203 13702.982 - 13762.560: 94.6857% ( 38) 00:26:23.203 13762.560 - 13822.138: 94.9947% ( 35) 00:26:23.203 13822.138 - 13881.716: 95.3037% ( 35) 00:26:23.203 13881.716 - 13941.295: 95.5508% ( 28) 00:26:23.203 13941.295 - 14000.873: 95.7715% ( 25) 00:26:23.203 14000.873 - 14060.451: 95.9922% ( 25) 00:26:23.203 14060.451 - 14120.029: 96.2041% ( 24) 00:26:23.203 14120.029 - 14179.607: 96.3718% ( 19) 00:26:23.203 14179.607 - 14239.185: 96.5925% ( 25) 00:26:23.203 14239.185 - 14298.764: 96.7073% ( 13) 00:26:23.203 14298.764 - 14358.342: 96.8132% ( 12) 00:26:23.203 14358.342 - 14417.920: 96.9191% ( 12) 00:26:23.203 14417.920 - 14477.498: 97.0074% ( 10) 00:26:23.203 14477.498 - 14537.076: 97.1928% ( 21) 00:26:23.203 14537.076 - 14596.655: 97.2899% ( 11) 00:26:23.203 14596.655 - 14656.233: 97.3605% ( 8) 00:26:23.203 14656.233 - 14715.811: 97.4753% ( 13) 00:26:23.203 14715.811 - 14775.389: 97.5547% ( 9) 00:26:23.203 14775.389 - 14834.967: 97.6607% ( 12) 00:26:23.203 14834.967 - 14894.545: 97.7401% ( 9) 00:26:23.203 14894.545 - 14954.124: 97.8196% ( 9) 00:26:23.203 14954.124 - 15013.702: 97.8990% ( 9) 
00:26:23.203 15013.702 - 15073.280: 97.9520% ( 6) 00:26:23.203 15073.280 - 15132.858: 97.9961% ( 5) 00:26:23.203 15132.858 - 15192.436: 98.0491% ( 6) 00:26:23.203 15192.436 - 15252.015: 98.0756% ( 3) 00:26:23.203 15252.015 - 15371.171: 98.1638% ( 10) 00:26:23.203 15371.171 - 15490.327: 98.2609% ( 11) 00:26:23.203 15490.327 - 15609.484: 98.3934% ( 15) 00:26:23.203 15609.484 - 15728.640: 98.4728% ( 9) 00:26:23.203 15728.640 - 15847.796: 98.5169% ( 5) 00:26:23.203 15847.796 - 15966.953: 98.5523% ( 4) 00:26:23.203 15966.953 - 16086.109: 98.6141% ( 7) 00:26:23.203 16086.109 - 16205.265: 98.6758% ( 7) 00:26:23.203 16205.265 - 16324.422: 98.7023% ( 3) 00:26:23.203 16324.422 - 16443.578: 98.7553% ( 6) 00:26:23.203 16443.578 - 16562.735: 98.7994% ( 5) 00:26:23.203 16562.735 - 16681.891: 98.8524% ( 6) 00:26:23.203 16681.891 - 16801.047: 98.8701% ( 2) 00:26:23.203 31695.593 - 31933.905: 98.9407% ( 8) 00:26:23.203 31933.905 - 32172.218: 98.9672% ( 3) 00:26:23.203 32172.218 - 32410.531: 99.0113% ( 5) 00:26:23.203 32410.531 - 32648.844: 99.0554% ( 5) 00:26:23.203 32648.844 - 32887.156: 99.0996% ( 5) 00:26:23.203 32887.156 - 33125.469: 99.1525% ( 6) 00:26:23.203 33125.469 - 33363.782: 99.1879% ( 4) 00:26:23.203 33363.782 - 33602.095: 99.2320% ( 5) 00:26:23.203 33602.095 - 33840.407: 99.2673% ( 4) 00:26:23.203 33840.407 - 34078.720: 99.3114% ( 5) 00:26:23.203 34078.720 - 34317.033: 99.3556% ( 5) 00:26:23.203 34317.033 - 34555.345: 99.3997% ( 5) 00:26:23.203 34555.345 - 34793.658: 99.4350% ( 4) 00:26:23.203 41466.415 - 41704.727: 99.4615% ( 3) 00:26:23.203 41704.727 - 41943.040: 99.5056% ( 5) 00:26:23.203 41943.040 - 42181.353: 99.5674% ( 7) 00:26:23.203 42181.353 - 42419.665: 99.6204% ( 6) 00:26:23.203 42419.665 - 42657.978: 99.6645% ( 5) 00:26:23.203 42657.978 - 42896.291: 99.7263% ( 7) 00:26:23.203 42896.291 - 43134.604: 99.7793% ( 6) 00:26:23.203 43134.604 - 43372.916: 99.8323% ( 6) 00:26:23.203 43372.916 - 43611.229: 99.8764% ( 5) 00:26:23.203 43611.229 - 43849.542: 99.9382% ( 7) 00:26:23.203 43849.542 - 44087.855: 99.9912% ( 6) 00:26:23.203 44087.855 - 44326.167: 100.0000% ( 1) 00:26:23.203 00:26:23.203 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:26:23.203 ============================================================================== 00:26:23.203 Range in us Cumulative IO count 00:26:23.203 9294.196 - 9353.775: 0.0265% ( 3) 00:26:23.203 9353.775 - 9413.353: 0.1324% ( 12) 00:26:23.203 9413.353 - 9472.931: 0.2472% ( 13) 00:26:23.203 9472.931 - 9532.509: 0.5385% ( 33) 00:26:23.203 9532.509 - 9592.087: 1.1476% ( 69) 00:26:23.203 9592.087 - 9651.665: 1.9774% ( 94) 00:26:23.203 9651.665 - 9711.244: 3.1603% ( 134) 00:26:23.203 9711.244 - 9770.822: 4.5992% ( 163) 00:26:23.203 9770.822 - 9830.400: 6.4972% ( 215) 00:26:23.203 9830.400 - 9889.978: 8.7747% ( 258) 00:26:23.203 9889.978 - 9949.556: 11.1758% ( 272) 00:26:23.203 9949.556 - 10009.135: 13.2945% ( 240) 00:26:23.203 10009.135 - 10068.713: 15.8457% ( 289) 00:26:23.203 10068.713 - 10128.291: 18.4940% ( 300) 00:26:23.203 10128.291 - 10187.869: 21.3100% ( 319) 00:26:23.203 10187.869 - 10247.447: 24.5145% ( 363) 00:26:23.203 10247.447 - 10307.025: 28.0191% ( 397) 00:26:23.203 10307.025 - 10366.604: 31.3824% ( 381) 00:26:23.203 10366.604 - 10426.182: 34.7546% ( 382) 00:26:23.203 10426.182 - 10485.760: 38.2150% ( 392) 00:26:23.203 10485.760 - 10545.338: 41.6931% ( 394) 00:26:23.203 10545.338 - 10604.916: 45.3125% ( 410) 00:26:23.203 10604.916 - 10664.495: 48.7730% ( 392) 00:26:23.203 10664.495 - 10724.073: 51.9068% ( 355) 00:26:23.203 
10724.073 - 10783.651: 54.8552% ( 334) 00:26:23.203 10783.651 - 10843.229: 57.5830% ( 309) 00:26:23.203 10843.229 - 10902.807: 59.9929% ( 273) 00:26:23.203 10902.807 - 10962.385: 62.2528% ( 256) 00:26:23.203 10962.385 - 11021.964: 64.3362% ( 236) 00:26:23.203 11021.964 - 11081.542: 66.4371% ( 238) 00:26:23.203 11081.542 - 11141.120: 68.4763% ( 231) 00:26:23.203 11141.120 - 11200.698: 70.1271% ( 187) 00:26:23.203 11200.698 - 11260.276: 71.4778% ( 153) 00:26:23.203 11260.276 - 11319.855: 72.7931% ( 149) 00:26:23.203 11319.855 - 11379.433: 73.9495% ( 131) 00:26:23.203 11379.433 - 11439.011: 75.1236% ( 133) 00:26:23.203 11439.011 - 11498.589: 76.1388% ( 115) 00:26:23.203 11498.589 - 11558.167: 76.9509% ( 92) 00:26:23.203 11558.167 - 11617.745: 77.7013% ( 85) 00:26:23.203 11617.745 - 11677.324: 78.4693% ( 87) 00:26:23.203 11677.324 - 11736.902: 79.2903% ( 93) 00:26:23.203 11736.902 - 11796.480: 79.9170% ( 71) 00:26:23.203 11796.480 - 11856.058: 80.5968% ( 77) 00:26:23.203 11856.058 - 11915.636: 81.2235% ( 71) 00:26:23.203 11915.636 - 11975.215: 81.8591% ( 72) 00:26:23.203 11975.215 - 12034.793: 82.6006% ( 84) 00:26:23.203 12034.793 - 12094.371: 83.2451% ( 73) 00:26:23.203 12094.371 - 12153.949: 83.8806% ( 72) 00:26:23.203 12153.949 - 12213.527: 84.4191% ( 61) 00:26:23.203 12213.527 - 12273.105: 84.8605% ( 50) 00:26:23.203 12273.105 - 12332.684: 85.4343% ( 65) 00:26:23.203 12332.684 - 12392.262: 85.8845% ( 51) 00:26:23.203 12392.262 - 12451.840: 86.3259% ( 50) 00:26:23.203 12451.840 - 12511.418: 86.7320% ( 46) 00:26:23.203 12511.418 - 12570.996: 87.0674% ( 38) 00:26:23.203 12570.996 - 12630.575: 87.4117% ( 39) 00:26:23.203 12630.575 - 12690.153: 87.7913% ( 43) 00:26:23.203 12690.153 - 12749.731: 88.1179% ( 37) 00:26:23.203 12749.731 - 12809.309: 88.5505% ( 49) 00:26:23.203 12809.309 - 12868.887: 89.0890% ( 61) 00:26:23.203 12868.887 - 12928.465: 89.5922% ( 57) 00:26:23.203 12928.465 - 12988.044: 90.0512% ( 52) 00:26:23.203 12988.044 - 13047.622: 90.4396% ( 44) 00:26:23.203 13047.622 - 13107.200: 90.8369% ( 45) 00:26:23.203 13107.200 - 13166.778: 91.2606% ( 48) 00:26:23.203 13166.778 - 13226.356: 91.6755% ( 47) 00:26:23.203 13226.356 - 13285.935: 92.0904% ( 47) 00:26:23.203 13285.935 - 13345.513: 92.4876% ( 45) 00:26:23.203 13345.513 - 13405.091: 92.8761% ( 44) 00:26:23.203 13405.091 - 13464.669: 93.2203% ( 39) 00:26:23.203 13464.669 - 13524.247: 93.5823% ( 41) 00:26:23.203 13524.247 - 13583.825: 93.9442% ( 41) 00:26:23.203 13583.825 - 13643.404: 94.3150% ( 42) 00:26:23.203 13643.404 - 13702.982: 94.7034% ( 44) 00:26:23.203 13702.982 - 13762.560: 95.0830% ( 43) 00:26:23.203 13762.560 - 13822.138: 95.4096% ( 37) 00:26:23.203 13822.138 - 13881.716: 95.6833% ( 31) 00:26:23.203 13881.716 - 13941.295: 95.9304% ( 28) 00:26:23.203 13941.295 - 14000.873: 96.1423% ( 24) 00:26:23.203 14000.873 - 14060.451: 96.3277% ( 21) 00:26:23.204 14060.451 - 14120.029: 96.5042% ( 20) 00:26:23.204 14120.029 - 14179.607: 96.6455% ( 16) 00:26:23.204 14179.607 - 14239.185: 96.7691% ( 14) 00:26:23.204 14239.185 - 14298.764: 96.9191% ( 17) 00:26:23.204 14298.764 - 14358.342: 97.0427% ( 14) 00:26:23.204 14358.342 - 14417.920: 97.1840% ( 16) 00:26:23.204 14417.920 - 14477.498: 97.3517% ( 19) 00:26:23.204 14477.498 - 14537.076: 97.4488% ( 11) 00:26:23.204 14537.076 - 14596.655: 97.5636% ( 13) 00:26:23.204 14596.655 - 14656.233: 97.7048% ( 16) 00:26:23.204 14656.233 - 14715.811: 97.8460% ( 16) 00:26:23.204 14715.811 - 14775.389: 97.9520% ( 12) 00:26:23.204 14775.389 - 14834.967: 98.0667% ( 13) 00:26:23.204 14834.967 - 14894.545: 
98.1550% ( 10) 00:26:23.204 14894.545 - 14954.124: 98.2345% ( 9) 00:26:23.204 14954.124 - 15013.702: 98.2963% ( 7) 00:26:23.204 15013.702 - 15073.280: 98.3492% ( 6) 00:26:23.204 15073.280 - 15132.858: 98.4022% ( 6) 00:26:23.204 15132.858 - 15192.436: 98.4552% ( 6) 00:26:23.204 15192.436 - 15252.015: 98.5169% ( 7) 00:26:23.204 15252.015 - 15371.171: 98.6229% ( 12) 00:26:23.204 15371.171 - 15490.327: 98.7112% ( 10) 00:26:23.204 15490.327 - 15609.484: 98.7553% ( 5) 00:26:23.204 15609.484 - 15728.640: 98.8083% ( 6) 00:26:23.204 15728.640 - 15847.796: 98.8701% ( 7) 00:26:23.204 30265.716 - 30384.873: 98.8877% ( 2) 00:26:23.204 30384.873 - 30504.029: 98.9142% ( 3) 00:26:23.204 30504.029 - 30742.342: 98.9672% ( 6) 00:26:23.204 30742.342 - 30980.655: 99.0201% ( 6) 00:26:23.204 30980.655 - 31218.967: 99.0731% ( 6) 00:26:23.204 31218.967 - 31457.280: 99.1261% ( 6) 00:26:23.204 31457.280 - 31695.593: 99.1790% ( 6) 00:26:23.204 31695.593 - 31933.905: 99.2320% ( 6) 00:26:23.204 31933.905 - 32172.218: 99.2850% ( 6) 00:26:23.204 32172.218 - 32410.531: 99.3379% ( 6) 00:26:23.204 32410.531 - 32648.844: 99.3821% ( 5) 00:26:23.204 32648.844 - 32887.156: 99.4350% ( 6) 00:26:23.204 38606.662 - 38844.975: 99.4527% ( 2) 00:26:23.204 38844.975 - 39083.287: 99.4968% ( 5) 00:26:23.204 39083.287 - 39321.600: 99.5233% ( 3) 00:26:23.204 39321.600 - 39559.913: 99.5674% ( 5) 00:26:23.204 39559.913 - 39798.225: 99.6028% ( 4) 00:26:23.204 39798.225 - 40036.538: 99.6381% ( 4) 00:26:23.204 40036.538 - 40274.851: 99.6822% ( 5) 00:26:23.204 40274.851 - 40513.164: 99.7263% ( 5) 00:26:23.204 40513.164 - 40751.476: 99.7793% ( 6) 00:26:23.204 40751.476 - 40989.789: 99.8323% ( 6) 00:26:23.204 40989.789 - 41228.102: 99.8941% ( 7) 00:26:23.204 41228.102 - 41466.415: 99.9470% ( 6) 00:26:23.204 41466.415 - 41704.727: 100.0000% ( 6) 00:26:23.204 00:26:23.204 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:26:23.204 ============================================================================== 00:26:23.204 Range in us Cumulative IO count 00:26:23.204 9234.618 - 9294.196: 0.0177% ( 2) 00:26:23.204 9294.196 - 9353.775: 0.0883% ( 8) 00:26:23.204 9353.775 - 9413.353: 0.1677% ( 9) 00:26:23.204 9413.353 - 9472.931: 0.3178% ( 17) 00:26:23.204 9472.931 - 9532.509: 0.7239% ( 46) 00:26:23.204 9532.509 - 9592.087: 1.4654% ( 84) 00:26:23.204 9592.087 - 9651.665: 2.4276% ( 109) 00:26:23.204 9651.665 - 9711.244: 3.5752% ( 130) 00:26:23.204 9711.244 - 9770.822: 5.1377% ( 177) 00:26:23.204 9770.822 - 9830.400: 6.6737% ( 174) 00:26:23.204 9830.400 - 9889.978: 8.5364% ( 211) 00:26:23.204 9889.978 - 9949.556: 10.5579% ( 229) 00:26:23.204 9949.556 - 10009.135: 12.7560% ( 249) 00:26:23.204 10009.135 - 10068.713: 15.3249% ( 291) 00:26:23.204 10068.713 - 10128.291: 18.2203% ( 328) 00:26:23.204 10128.291 - 10187.869: 21.6720% ( 391) 00:26:23.204 10187.869 - 10247.447: 25.2560% ( 406) 00:26:23.204 10247.447 - 10307.025: 28.6194% ( 381) 00:26:23.204 10307.025 - 10366.604: 31.9827% ( 381) 00:26:23.204 10366.604 - 10426.182: 35.4696% ( 395) 00:26:23.204 10426.182 - 10485.760: 39.0184% ( 402) 00:26:23.204 10485.760 - 10545.338: 42.3905% ( 382) 00:26:23.204 10545.338 - 10604.916: 45.7980% ( 386) 00:26:23.204 10604.916 - 10664.495: 48.8612% ( 347) 00:26:23.204 10664.495 - 10724.073: 51.7655% ( 329) 00:26:23.204 10724.073 - 10783.651: 54.5463% ( 315) 00:26:23.204 10783.651 - 10843.229: 57.3181% ( 314) 00:26:23.204 10843.229 - 10902.807: 59.8694% ( 289) 00:26:23.204 10902.807 - 10962.385: 62.2793% ( 273) 00:26:23.204 10962.385 - 11021.964: 64.5922% ( 
262) 00:26:23.204 11021.964 - 11081.542: 66.8344% ( 254) 00:26:23.204 11081.542 - 11141.120: 69.0148% ( 247) 00:26:23.204 11141.120 - 11200.698: 70.8951% ( 213) 00:26:23.204 11200.698 - 11260.276: 72.5636% ( 189) 00:26:23.204 11260.276 - 11319.855: 73.9230% ( 154) 00:26:23.204 11319.855 - 11379.433: 75.0794% ( 131) 00:26:23.204 11379.433 - 11439.011: 76.1564% ( 122) 00:26:23.204 11439.011 - 11498.589: 77.1363% ( 111) 00:26:23.204 11498.589 - 11558.167: 77.9838% ( 96) 00:26:23.204 11558.167 - 11617.745: 78.6811% ( 79) 00:26:23.204 11617.745 - 11677.324: 79.3609% ( 77) 00:26:23.204 11677.324 - 11736.902: 79.9788% ( 70) 00:26:23.204 11736.902 - 11796.480: 80.5968% ( 70) 00:26:23.204 11796.480 - 11856.058: 81.2500% ( 74) 00:26:23.204 11856.058 - 11915.636: 81.8768% ( 71) 00:26:23.204 11915.636 - 11975.215: 82.5477% ( 76) 00:26:23.204 11975.215 - 12034.793: 83.1833% ( 72) 00:26:23.204 12034.793 - 12094.371: 83.8189% ( 72) 00:26:23.204 12094.371 - 12153.949: 84.2867% ( 53) 00:26:23.204 12153.949 - 12213.527: 84.6663% ( 43) 00:26:23.204 12213.527 - 12273.105: 85.0459% ( 43) 00:26:23.204 12273.105 - 12332.684: 85.3990% ( 40) 00:26:23.204 12332.684 - 12392.262: 85.7256% ( 37) 00:26:23.204 12392.262 - 12451.840: 86.0346% ( 35) 00:26:23.204 12451.840 - 12511.418: 86.3347% ( 34) 00:26:23.204 12511.418 - 12570.996: 86.7320% ( 45) 00:26:23.204 12570.996 - 12630.575: 87.1999% ( 53) 00:26:23.204 12630.575 - 12690.153: 87.7207% ( 59) 00:26:23.204 12690.153 - 12749.731: 88.2239% ( 57) 00:26:23.204 12749.731 - 12809.309: 88.6653% ( 50) 00:26:23.204 12809.309 - 12868.887: 89.1773% ( 58) 00:26:23.204 12868.887 - 12928.465: 89.7511% ( 65) 00:26:23.204 12928.465 - 12988.044: 90.2013% ( 51) 00:26:23.204 12988.044 - 13047.622: 90.6780% ( 54) 00:26:23.204 13047.622 - 13107.200: 91.1194% ( 50) 00:26:23.204 13107.200 - 13166.778: 91.4636% ( 39) 00:26:23.204 13166.778 - 13226.356: 91.8256% ( 41) 00:26:23.204 13226.356 - 13285.935: 92.1875% ( 41) 00:26:23.204 13285.935 - 13345.513: 92.5406% ( 40) 00:26:23.204 13345.513 - 13405.091: 92.8584% ( 36) 00:26:23.204 13405.091 - 13464.669: 93.1585% ( 34) 00:26:23.204 13464.669 - 13524.247: 93.4852% ( 37) 00:26:23.204 13524.247 - 13583.825: 93.7853% ( 34) 00:26:23.204 13583.825 - 13643.404: 94.0325% ( 28) 00:26:23.204 13643.404 - 13702.982: 94.3415% ( 35) 00:26:23.204 13702.982 - 13762.560: 94.6593% ( 36) 00:26:23.204 13762.560 - 13822.138: 95.0124% ( 40) 00:26:23.204 13822.138 - 13881.716: 95.3125% ( 34) 00:26:23.204 13881.716 - 13941.295: 95.6391% ( 37) 00:26:23.204 13941.295 - 14000.873: 95.9393% ( 34) 00:26:23.204 14000.873 - 14060.451: 96.1776% ( 27) 00:26:23.204 14060.451 - 14120.029: 96.3806% ( 23) 00:26:23.204 14120.029 - 14179.607: 96.5660% ( 21) 00:26:23.204 14179.607 - 14239.185: 96.7073% ( 16) 00:26:23.204 14239.185 - 14298.764: 96.8573% ( 17) 00:26:23.204 14298.764 - 14358.342: 96.9986% ( 16) 00:26:23.204 14358.342 - 14417.920: 97.1045% ( 12) 00:26:23.204 14417.920 - 14477.498: 97.2105% ( 12) 00:26:23.204 14477.498 - 14537.076: 97.3340% ( 14) 00:26:23.204 14537.076 - 14596.655: 97.4665% ( 15) 00:26:23.204 14596.655 - 14656.233: 97.6342% ( 19) 00:26:23.204 14656.233 - 14715.811: 97.7666% ( 15) 00:26:23.204 14715.811 - 14775.389: 97.8990% ( 15) 00:26:23.204 14775.389 - 14834.967: 97.9608% ( 7) 00:26:23.204 14834.967 - 14894.545: 98.0314% ( 8) 00:26:23.204 14894.545 - 14954.124: 98.0756% ( 5) 00:26:23.204 14954.124 - 15013.702: 98.1109% ( 4) 00:26:23.204 15013.702 - 15073.280: 98.1727% ( 7) 00:26:23.204 15073.280 - 15132.858: 98.2256% ( 6) 00:26:23.204 15132.858 - 
15192.436: 98.2786% ( 6) 00:26:23.204 15192.436 - 15252.015: 98.3316% ( 6) 00:26:23.204 15252.015 - 15371.171: 98.4198% ( 10) 00:26:23.204 15371.171 - 15490.327: 98.4993% ( 9) 00:26:23.204 15490.327 - 15609.484: 98.5523% ( 6) 00:26:23.204 15609.484 - 15728.640: 98.5876% ( 4) 00:26:23.204 15728.640 - 15847.796: 98.6229% ( 4) 00:26:23.204 15847.796 - 15966.953: 98.6670% ( 5) 00:26:23.204 15966.953 - 16086.109: 98.7112% ( 5) 00:26:23.204 16086.109 - 16205.265: 98.7553% ( 5) 00:26:23.204 16205.265 - 16324.422: 98.8083% ( 6) 00:26:23.204 16324.422 - 16443.578: 98.8524% ( 5) 00:26:23.204 16443.578 - 16562.735: 98.8701% ( 2) 00:26:23.204 28240.058 - 28359.215: 98.8789% ( 1) 00:26:23.204 28359.215 - 28478.371: 98.8965% ( 2) 00:26:23.204 28478.371 - 28597.527: 98.9230% ( 3) 00:26:23.204 28597.527 - 28716.684: 98.9583% ( 4) 00:26:23.204 28716.684 - 28835.840: 98.9848% ( 3) 00:26:23.204 28835.840 - 28954.996: 99.0113% ( 3) 00:26:23.204 28954.996 - 29074.153: 99.0378% ( 3) 00:26:23.204 29074.153 - 29193.309: 99.0643% ( 3) 00:26:23.204 29193.309 - 29312.465: 99.0819% ( 2) 00:26:23.204 29312.465 - 29431.622: 99.0996% ( 2) 00:26:23.204 29431.622 - 29550.778: 99.1172% ( 2) 00:26:23.204 29550.778 - 29669.935: 99.1349% ( 2) 00:26:23.204 29669.935 - 29789.091: 99.1525% ( 2) 00:26:23.204 29789.091 - 29908.247: 99.1790% ( 3) 00:26:23.204 29908.247 - 30027.404: 99.2055% ( 3) 00:26:23.204 30027.404 - 30146.560: 99.2232% ( 2) 00:26:23.204 30146.560 - 30265.716: 99.2408% ( 2) 00:26:23.204 30265.716 - 30384.873: 99.2585% ( 2) 00:26:23.204 30384.873 - 30504.029: 99.2761% ( 2) 00:26:23.204 30504.029 - 30742.342: 99.3291% ( 6) 00:26:23.204 30742.342 - 30980.655: 99.3821% ( 6) 00:26:23.204 30980.655 - 31218.967: 99.4350% ( 6) 00:26:23.204 36938.473 - 37176.785: 99.4615% ( 3) 00:26:23.204 37176.785 - 37415.098: 99.5056% ( 5) 00:26:23.205 37415.098 - 37653.411: 99.5674% ( 7) 00:26:23.205 37653.411 - 37891.724: 99.6204% ( 6) 00:26:23.205 37891.724 - 38130.036: 99.6645% ( 5) 00:26:23.205 38130.036 - 38368.349: 99.7175% ( 6) 00:26:23.205 38368.349 - 38606.662: 99.7705% ( 6) 00:26:23.205 38606.662 - 38844.975: 99.8234% ( 6) 00:26:23.205 38844.975 - 39083.287: 99.8852% ( 7) 00:26:23.205 39083.287 - 39321.600: 99.9470% ( 7) 00:26:23.205 39321.600 - 39559.913: 100.0000% ( 6) 00:26:23.205 00:26:23.205 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:26:23.205 ============================================================================== 00:26:23.205 Range in us Cumulative IO count 00:26:23.205 9175.040 - 9234.618: 0.0441% ( 5) 00:26:23.205 9234.618 - 9294.196: 0.1148% ( 8) 00:26:23.205 9294.196 - 9353.775: 0.1677% ( 6) 00:26:23.205 9353.775 - 9413.353: 0.1766% ( 1) 00:26:23.205 9413.353 - 9472.931: 0.2295% ( 6) 00:26:23.205 9472.931 - 9532.509: 0.3884% ( 18) 00:26:23.205 9532.509 - 9592.087: 0.8298% ( 50) 00:26:23.205 9592.087 - 9651.665: 1.9509% ( 127) 00:26:23.205 9651.665 - 9711.244: 3.2398% ( 146) 00:26:23.205 9711.244 - 9770.822: 4.7669% ( 173) 00:26:23.205 9770.822 - 9830.400: 6.6031% ( 208) 00:26:23.205 9830.400 - 9889.978: 8.5275% ( 218) 00:26:23.205 9889.978 - 9949.556: 10.7080% ( 247) 00:26:23.205 9949.556 - 10009.135: 13.0650% ( 267) 00:26:23.205 10009.135 - 10068.713: 15.4838% ( 274) 00:26:23.205 10068.713 - 10128.291: 17.9555% ( 280) 00:26:23.205 10128.291 - 10187.869: 21.0099% ( 346) 00:26:23.205 10187.869 - 10247.447: 24.2850% ( 371) 00:26:23.205 10247.447 - 10307.025: 27.5689% ( 372) 00:26:23.205 10307.025 - 10366.604: 30.9499% ( 383) 00:26:23.205 10366.604 - 10426.182: 34.5604% ( 409) 
00:26:23.205 10426.182 - 10485.760: 38.1974% ( 412) 00:26:23.205 10485.760 - 10545.338: 41.9403% ( 424) 00:26:23.205 10545.338 - 10604.916: 45.4979% ( 403) 00:26:23.205 10604.916 - 10664.495: 48.6670% ( 359) 00:26:23.205 10664.495 - 10724.073: 51.5360% ( 325) 00:26:23.205 10724.073 - 10783.651: 54.3344% ( 317) 00:26:23.205 10783.651 - 10843.229: 57.0798% ( 311) 00:26:23.205 10843.229 - 10902.807: 59.7987% ( 308) 00:26:23.205 10902.807 - 10962.385: 62.2970% ( 283) 00:26:23.205 10962.385 - 11021.964: 64.6010% ( 261) 00:26:23.205 11021.964 - 11081.542: 66.7903% ( 248) 00:26:23.205 11081.542 - 11141.120: 68.8824% ( 237) 00:26:23.205 11141.120 - 11200.698: 70.9393% ( 233) 00:26:23.205 11200.698 - 11260.276: 72.6607% ( 195) 00:26:23.205 11260.276 - 11319.855: 74.1084% ( 164) 00:26:23.205 11319.855 - 11379.433: 75.3001% ( 135) 00:26:23.205 11379.433 - 11439.011: 76.2712% ( 110) 00:26:23.205 11439.011 - 11498.589: 77.0745% ( 91) 00:26:23.205 11498.589 - 11558.167: 77.7278% ( 74) 00:26:23.205 11558.167 - 11617.745: 78.3722% ( 73) 00:26:23.205 11617.745 - 11677.324: 79.0607% ( 78) 00:26:23.205 11677.324 - 11736.902: 79.8376% ( 88) 00:26:23.205 11736.902 - 11796.480: 80.5261% ( 78) 00:26:23.205 11796.480 - 11856.058: 81.1529% ( 71) 00:26:23.205 11856.058 - 11915.636: 81.7973% ( 73) 00:26:23.205 11915.636 - 11975.215: 82.4153% ( 70) 00:26:23.205 11975.215 - 12034.793: 83.0420% ( 71) 00:26:23.205 12034.793 - 12094.371: 83.7394% ( 79) 00:26:23.205 12094.371 - 12153.949: 84.2602% ( 59) 00:26:23.205 12153.949 - 12213.527: 84.7105% ( 51) 00:26:23.205 12213.527 - 12273.105: 85.1871% ( 54) 00:26:23.205 12273.105 - 12332.684: 85.5403% ( 40) 00:26:23.205 12332.684 - 12392.262: 85.8669% ( 37) 00:26:23.205 12392.262 - 12451.840: 86.2906% ( 48) 00:26:23.205 12451.840 - 12511.418: 86.7232% ( 49) 00:26:23.205 12511.418 - 12570.996: 87.1734% ( 51) 00:26:23.205 12570.996 - 12630.575: 87.6059% ( 49) 00:26:23.205 12630.575 - 12690.153: 87.9944% ( 44) 00:26:23.205 12690.153 - 12749.731: 88.4269% ( 49) 00:26:23.205 12749.731 - 12809.309: 88.8595% ( 49) 00:26:23.205 12809.309 - 12868.887: 89.2214% ( 41) 00:26:23.205 12868.887 - 12928.465: 89.6098% ( 44) 00:26:23.205 12928.465 - 12988.044: 90.0865% ( 54) 00:26:23.205 12988.044 - 13047.622: 90.6338% ( 62) 00:26:23.205 13047.622 - 13107.200: 91.1194% ( 55) 00:26:23.205 13107.200 - 13166.778: 91.6225% ( 57) 00:26:23.205 13166.778 - 13226.356: 92.0727% ( 51) 00:26:23.205 13226.356 - 13285.935: 92.4523% ( 43) 00:26:23.205 13285.935 - 13345.513: 92.7790% ( 37) 00:26:23.205 13345.513 - 13405.091: 93.1409% ( 41) 00:26:23.205 13405.091 - 13464.669: 93.4322% ( 33) 00:26:23.205 13464.669 - 13524.247: 93.7412% ( 35) 00:26:23.205 13524.247 - 13583.825: 94.1384% ( 45) 00:26:23.205 13583.825 - 13643.404: 94.5180% ( 43) 00:26:23.205 13643.404 - 13702.982: 94.8270% ( 35) 00:26:23.205 13702.982 - 13762.560: 95.1095% ( 32) 00:26:23.205 13762.560 - 13822.138: 95.4449% ( 38) 00:26:23.205 13822.138 - 13881.716: 95.7097% ( 30) 00:26:23.205 13881.716 - 13941.295: 95.9746% ( 30) 00:26:23.205 13941.295 - 14000.873: 96.1864% ( 24) 00:26:23.205 14000.873 - 14060.451: 96.4071% ( 25) 00:26:23.205 14060.451 - 14120.029: 96.5572% ( 17) 00:26:23.205 14120.029 - 14179.607: 96.7073% ( 17) 00:26:23.205 14179.607 - 14239.185: 96.8132% ( 12) 00:26:23.205 14239.185 - 14298.764: 96.9898% ( 20) 00:26:23.205 14298.764 - 14358.342: 97.1928% ( 23) 00:26:23.205 14358.342 - 14417.920: 97.3429% ( 17) 00:26:23.205 14417.920 - 14477.498: 97.4929% ( 17) 00:26:23.205 14477.498 - 14537.076: 97.6254% ( 15) 00:26:23.205 
14537.076 - 14596.655: 97.7489% ( 14) 00:26:23.205 14596.655 - 14656.233: 97.8372% ( 10) 00:26:23.205 14656.233 - 14715.811: 97.8990% ( 7) 00:26:23.205 14715.811 - 14775.389: 97.9520% ( 6) 00:26:23.205 14775.389 - 14834.967: 98.0049% ( 6) 00:26:23.205 14834.967 - 14894.545: 98.0579% ( 6) 00:26:23.205 14894.545 - 14954.124: 98.1109% ( 6) 00:26:23.205 14954.124 - 15013.702: 98.1374% ( 3) 00:26:23.205 15013.702 - 15073.280: 98.1638% ( 3) 00:26:23.205 15073.280 - 15132.858: 98.1903% ( 3) 00:26:23.205 15132.858 - 15192.436: 98.2168% ( 3) 00:26:23.205 15192.436 - 15252.015: 98.2698% ( 6) 00:26:23.205 15252.015 - 15371.171: 98.4110% ( 16) 00:26:23.205 15371.171 - 15490.327: 98.4728% ( 7) 00:26:23.205 15490.327 - 15609.484: 98.5434% ( 8) 00:26:23.205 15609.484 - 15728.640: 98.5876% ( 5) 00:26:23.205 15728.640 - 15847.796: 98.6141% ( 3) 00:26:23.205 15847.796 - 15966.953: 98.6494% ( 4) 00:26:23.205 15966.953 - 16086.109: 98.6847% ( 4) 00:26:23.205 16086.109 - 16205.265: 98.7288% ( 5) 00:26:23.205 16205.265 - 16324.422: 98.7818% ( 6) 00:26:23.205 16324.422 - 16443.578: 98.8259% ( 5) 00:26:23.205 16443.578 - 16562.735: 98.8701% ( 5) 00:26:23.205 25618.618 - 25737.775: 98.8965% ( 3) 00:26:23.205 25737.775 - 25856.931: 98.9319% ( 4) 00:26:23.205 25856.931 - 25976.087: 98.9583% ( 3) 00:26:23.205 25976.087 - 26095.244: 98.9848% ( 3) 00:26:23.205 26095.244 - 26214.400: 99.0113% ( 3) 00:26:23.205 26214.400 - 26333.556: 99.0378% ( 3) 00:26:23.205 26333.556 - 26452.713: 99.0554% ( 2) 00:26:23.205 26452.713 - 26571.869: 99.0907% ( 4) 00:26:23.205 26571.869 - 26691.025: 99.1172% ( 3) 00:26:23.205 26691.025 - 26810.182: 99.1437% ( 3) 00:26:23.205 26810.182 - 26929.338: 99.1790% ( 4) 00:26:23.205 26929.338 - 27048.495: 99.2055% ( 3) 00:26:23.205 27048.495 - 27167.651: 99.2232% ( 2) 00:26:23.205 27167.651 - 27286.807: 99.2408% ( 2) 00:26:23.205 27286.807 - 27405.964: 99.2673% ( 3) 00:26:23.205 27405.964 - 27525.120: 99.2850% ( 2) 00:26:23.205 27525.120 - 27644.276: 99.3114% ( 3) 00:26:23.205 27644.276 - 27763.433: 99.3203% ( 1) 00:26:23.205 27763.433 - 27882.589: 99.3468% ( 3) 00:26:23.205 27882.589 - 28001.745: 99.3821% ( 4) 00:26:23.205 28001.745 - 28120.902: 99.4085% ( 3) 00:26:23.205 28120.902 - 28240.058: 99.4350% ( 3) 00:26:23.205 34078.720 - 34317.033: 99.4703% ( 4) 00:26:23.205 34317.033 - 34555.345: 99.5321% ( 7) 00:26:23.205 34555.345 - 34793.658: 99.5851% ( 6) 00:26:23.205 34793.658 - 35031.971: 99.6292% ( 5) 00:26:23.205 35031.971 - 35270.284: 99.6822% ( 6) 00:26:23.205 35270.284 - 35508.596: 99.7440% ( 7) 00:26:23.205 35508.596 - 35746.909: 99.8058% ( 7) 00:26:23.205 35746.909 - 35985.222: 99.8588% ( 6) 00:26:23.205 35985.222 - 36223.535: 99.9029% ( 5) 00:26:23.205 36223.535 - 36461.847: 99.9382% ( 4) 00:26:23.205 36461.847 - 36700.160: 99.9912% ( 6) 00:26:23.205 36700.160 - 36938.473: 100.0000% ( 1) 00:26:23.205 00:26:23.205 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:26:23.205 ============================================================================== 00:26:23.205 Range in us Cumulative IO count 00:26:23.205 9234.618 - 9294.196: 0.0441% ( 5) 00:26:23.205 9294.196 - 9353.775: 0.1324% ( 10) 00:26:23.205 9353.775 - 9413.353: 0.1766% ( 5) 00:26:23.205 9413.353 - 9472.931: 0.2737% ( 11) 00:26:23.205 9472.931 - 9532.509: 0.5561% ( 32) 00:26:23.205 9532.509 - 9592.087: 1.1035% ( 62) 00:26:23.205 9592.087 - 9651.665: 1.9244% ( 93) 00:26:23.205 9651.665 - 9711.244: 3.1073% ( 134) 00:26:23.205 9711.244 - 9770.822: 4.7493% ( 186) 00:26:23.205 9770.822 - 9830.400: 6.4707% ( 195) 
00:26:23.205 9830.400 - 9889.978: 8.2804% ( 205) 00:26:23.205 9889.978 - 9949.556: 10.2754% ( 226) 00:26:23.205 9949.556 - 10009.135: 12.4029% ( 241) 00:26:23.205 10009.135 - 10068.713: 14.9453% ( 288) 00:26:23.205 10068.713 - 10128.291: 17.2846% ( 265) 00:26:23.205 10128.291 - 10187.869: 20.1271% ( 322) 00:26:23.205 10187.869 - 10247.447: 23.5434% ( 387) 00:26:23.205 10247.447 - 10307.025: 27.1981% ( 414) 00:26:23.205 10307.025 - 10366.604: 31.0028% ( 431) 00:26:23.205 10366.604 - 10426.182: 34.6840% ( 417) 00:26:23.205 10426.182 - 10485.760: 38.6035% ( 444) 00:26:23.205 10485.760 - 10545.338: 42.1345% ( 400) 00:26:23.205 10545.338 - 10604.916: 45.4714% ( 378) 00:26:23.206 10604.916 - 10664.495: 48.4375% ( 336) 00:26:23.206 10664.495 - 10724.073: 51.4654% ( 343) 00:26:23.206 10724.073 - 10783.651: 54.2373% ( 314) 00:26:23.206 10783.651 - 10843.229: 56.8944% ( 301) 00:26:23.206 10843.229 - 10902.807: 59.5516% ( 301) 00:26:23.206 10902.807 - 10962.385: 62.0674% ( 285) 00:26:23.206 10962.385 - 11021.964: 64.4774% ( 273) 00:26:23.206 11021.964 - 11081.542: 66.6402% ( 245) 00:26:23.206 11081.542 - 11141.120: 68.6794% ( 231) 00:26:23.206 11141.120 - 11200.698: 70.5685% ( 214) 00:26:23.206 11200.698 - 11260.276: 72.3076% ( 197) 00:26:23.206 11260.276 - 11319.855: 73.7818% ( 167) 00:26:23.206 11319.855 - 11379.433: 74.9735% ( 135) 00:26:23.206 11379.433 - 11439.011: 75.9887% ( 115) 00:26:23.206 11439.011 - 11498.589: 76.8715% ( 100) 00:26:23.206 11498.589 - 11558.167: 77.8249% ( 108) 00:26:23.206 11558.167 - 11617.745: 78.7341% ( 103) 00:26:23.206 11617.745 - 11677.324: 79.5286% ( 90) 00:26:23.206 11677.324 - 11736.902: 80.2701% ( 84) 00:26:23.206 11736.902 - 11796.480: 80.9587% ( 78) 00:26:23.206 11796.480 - 11856.058: 81.6561% ( 79) 00:26:23.206 11856.058 - 11915.636: 82.3005% ( 73) 00:26:23.206 11915.636 - 11975.215: 82.8743% ( 65) 00:26:23.206 11975.215 - 12034.793: 83.4304% ( 63) 00:26:23.206 12034.793 - 12094.371: 84.0131% ( 66) 00:26:23.206 12094.371 - 12153.949: 84.5251% ( 58) 00:26:23.206 12153.949 - 12213.527: 85.0106% ( 55) 00:26:23.206 12213.527 - 12273.105: 85.4696% ( 52) 00:26:23.206 12273.105 - 12332.684: 85.8934% ( 48) 00:26:23.206 12332.684 - 12392.262: 86.2465% ( 40) 00:26:23.206 12392.262 - 12451.840: 86.6172% ( 42) 00:26:23.206 12451.840 - 12511.418: 87.0145% ( 45) 00:26:23.206 12511.418 - 12570.996: 87.3764% ( 41) 00:26:23.206 12570.996 - 12630.575: 87.7825% ( 46) 00:26:23.206 12630.575 - 12690.153: 88.2062% ( 48) 00:26:23.206 12690.153 - 12749.731: 88.6476% ( 50) 00:26:23.206 12749.731 - 12809.309: 89.0095% ( 41) 00:26:23.206 12809.309 - 12868.887: 89.3803% ( 42) 00:26:23.206 12868.887 - 12928.465: 89.7952% ( 47) 00:26:23.206 12928.465 - 12988.044: 90.2013% ( 46) 00:26:23.206 12988.044 - 13047.622: 90.6603% ( 52) 00:26:23.206 13047.622 - 13107.200: 91.0664% ( 46) 00:26:23.206 13107.200 - 13166.778: 91.4460% ( 43) 00:26:23.206 13166.778 - 13226.356: 91.9668% ( 59) 00:26:23.206 13226.356 - 13285.935: 92.4170% ( 51) 00:26:23.206 13285.935 - 13345.513: 92.8054% ( 44) 00:26:23.206 13345.513 - 13405.091: 93.1409% ( 38) 00:26:23.206 13405.091 - 13464.669: 93.4763% ( 38) 00:26:23.206 13464.669 - 13524.247: 93.7765% ( 34) 00:26:23.206 13524.247 - 13583.825: 94.0855% ( 35) 00:26:23.206 13583.825 - 13643.404: 94.3679% ( 32) 00:26:23.206 13643.404 - 13702.982: 94.6681% ( 34) 00:26:23.206 13702.982 - 13762.560: 94.9682% ( 34) 00:26:23.206 13762.560 - 13822.138: 95.2595% ( 33) 00:26:23.206 13822.138 - 13881.716: 95.5332% ( 31) 00:26:23.206 13881.716 - 13941.295: 95.8510% ( 36) 
00:26:23.206 13941.295 - 14000.873: 96.1423% ( 33) 00:26:23.206 14000.873 - 14060.451: 96.3365% ( 22) 00:26:23.206 14060.451 - 14120.029: 96.5042% ( 19) 00:26:23.206 14120.029 - 14179.607: 96.6808% ( 20) 00:26:23.206 14179.607 - 14239.185: 96.8397% ( 18) 00:26:23.206 14239.185 - 14298.764: 97.0251% ( 21) 00:26:23.206 14298.764 - 14358.342: 97.2193% ( 22) 00:26:23.206 14358.342 - 14417.920: 97.3340% ( 13) 00:26:23.206 14417.920 - 14477.498: 97.4135% ( 9) 00:26:23.206 14477.498 - 14537.076: 97.4841% ( 8) 00:26:23.206 14537.076 - 14596.655: 97.5547% ( 8) 00:26:23.206 14596.655 - 14656.233: 97.6430% ( 10) 00:26:23.206 14656.233 - 14715.811: 97.7225% ( 9) 00:26:23.206 14715.811 - 14775.389: 97.8107% ( 10) 00:26:23.206 14775.389 - 14834.967: 97.8902% ( 9) 00:26:23.206 14834.967 - 14894.545: 97.9608% ( 8) 00:26:23.206 14894.545 - 14954.124: 98.0226% ( 7) 00:26:23.206 14954.124 - 15013.702: 98.0756% ( 6) 00:26:23.206 15013.702 - 15073.280: 98.1374% ( 7) 00:26:23.206 15073.280 - 15132.858: 98.1903% ( 6) 00:26:23.206 15132.858 - 15192.436: 98.2345% ( 5) 00:26:23.206 15192.436 - 15252.015: 98.2609% ( 3) 00:26:23.206 15252.015 - 15371.171: 98.3051% ( 5) 00:26:23.206 15371.171 - 15490.327: 98.3492% ( 5) 00:26:23.206 15490.327 - 15609.484: 98.4375% ( 10) 00:26:23.206 15609.484 - 15728.640: 98.5434% ( 12) 00:26:23.206 15728.640 - 15847.796: 98.6317% ( 10) 00:26:23.206 15847.796 - 15966.953: 98.6670% ( 4) 00:26:23.206 15966.953 - 16086.109: 98.7112% ( 5) 00:26:23.206 16086.109 - 16205.265: 98.7553% ( 5) 00:26:23.206 16205.265 - 16324.422: 98.7906% ( 4) 00:26:23.206 16324.422 - 16443.578: 98.8436% ( 6) 00:26:23.206 16443.578 - 16562.735: 98.8701% ( 3) 00:26:23.206 23116.335 - 23235.491: 98.8877% ( 2) 00:26:23.206 23235.491 - 23354.647: 98.9142% ( 3) 00:26:23.206 23354.647 - 23473.804: 98.9407% ( 3) 00:26:23.206 23473.804 - 23592.960: 98.9672% ( 3) 00:26:23.206 23592.960 - 23712.116: 99.0025% ( 4) 00:26:23.206 23712.116 - 23831.273: 99.0290% ( 3) 00:26:23.206 23831.273 - 23950.429: 99.0554% ( 3) 00:26:23.206 23950.429 - 24069.585: 99.0907% ( 4) 00:26:23.206 24069.585 - 24188.742: 99.1172% ( 3) 00:26:23.206 24188.742 - 24307.898: 99.1437% ( 3) 00:26:23.206 24307.898 - 24427.055: 99.1702% ( 3) 00:26:23.206 24427.055 - 24546.211: 99.1967% ( 3) 00:26:23.206 24546.211 - 24665.367: 99.2232% ( 3) 00:26:23.206 24665.367 - 24784.524: 99.2585% ( 4) 00:26:23.206 24784.524 - 24903.680: 99.2850% ( 3) 00:26:23.206 24903.680 - 25022.836: 99.3114% ( 3) 00:26:23.206 25022.836 - 25141.993: 99.3468% ( 4) 00:26:23.206 25141.993 - 25261.149: 99.3732% ( 3) 00:26:23.206 25261.149 - 25380.305: 99.3997% ( 3) 00:26:23.206 25380.305 - 25499.462: 99.4262% ( 3) 00:26:23.206 25499.462 - 25618.618: 99.4350% ( 1) 00:26:23.206 31218.967 - 31457.280: 99.4439% ( 1) 00:26:23.206 31457.280 - 31695.593: 99.4968% ( 6) 00:26:23.206 31695.593 - 31933.905: 99.5498% ( 6) 00:26:23.206 31933.905 - 32172.218: 99.5939% ( 5) 00:26:23.206 32172.218 - 32410.531: 99.6557% ( 7) 00:26:23.206 32410.531 - 32648.844: 99.6999% ( 5) 00:26:23.206 32648.844 - 32887.156: 99.7617% ( 7) 00:26:23.206 32887.156 - 33125.469: 99.8146% ( 6) 00:26:23.206 33125.469 - 33363.782: 99.8676% ( 6) 00:26:23.206 33363.782 - 33602.095: 99.9294% ( 7) 00:26:23.206 33602.095 - 33840.407: 99.9823% ( 6) 00:26:23.206 33840.407 - 34078.720: 100.0000% ( 2) 00:26:23.206 00:26:23.206 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:26:23.206 ============================================================================== 00:26:23.206 Range in us Cumulative IO count 00:26:23.206 
00:26:23.206 [per-bucket latency rows elided: PCIE (0000:00:12.0) NSID 3 spans 9353.775 - 31457.280 us, cumulative 100.0000% ( 2) in the final bucket]
00:26:23.207
00:26:23.207 06:52:55 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:26:23.207
00:26:23.207 real 0m2.716s
00:26:23.207 user 0m2.345s
00:26:23.207 sys 0m0.274s
00:26:23.207 06:52:55 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:23.207 ************************************
00:26:23.207 06:52:55 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:26:23.207 END TEST nvme_perf
00:26:23.207 ************************************
00:26:23.207 06:52:55 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:26:23.207 06:52:55 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:26:23.207 06:52:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:23.207 06:52:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:23.207 ************************************
00:26:23.207 START TEST nvme_hello_world
00:26:23.207 ************************************
00:26:23.207 06:52:55 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:26:23.466 Initializing NVMe Controllers
00:26:23.466 Attached to 0000:00:10.0
00:26:23.466 Namespace ID: 1 size: 6GB
00:26:23.466 Attached to 0000:00:11.0
00:26:23.466 Namespace ID: 1 size: 5GB
00:26:23.466 Attached to 0000:00:13.0
00:26:23.466 Namespace ID: 1 size: 1GB
00:26:23.466 Attached to 0000:00:12.0
00:26:23.466 Namespace ID: 1 size: 4GB
00:26:23.466 Namespace ID: 2 size: 4GB
00:26:23.466 Namespace ID: 3 size: 4GB
00:26:23.466 Initialization complete.
00:26:23.466 INFO: using host memory buffer for IO
00:26:23.466 Hello world!
00:26:23.466 INFO: using host memory buffer for IO
00:26:23.466 Hello world!
00:26:23.466 INFO: using host memory buffer for IO
00:26:23.466 Hello world!
00:26:23.466 INFO: using host memory buffer for IO
00:26:23.466 Hello world!
00:26:23.466 INFO: using host memory buffer for IO
00:26:23.466 Hello world!
00:26:23.466 INFO: using host memory buffer for IO
00:26:23.466 Hello world!
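The hello_world pass above issues one I/O per attached namespace through a host memory buffer; its timing footer follows below. The binary path and the -i 0 (shared-memory instance id) flag are taken verbatim from the run_test line; a minimal sketch of reproducing the step by hand, assuming the same repo layout and that the devices have been bound to a userspace driver first (scripts/setup.sh is SPDK's standard binding helper):

    # rebind the NVMe devices away from the kernel driver so SPDK can claim them
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # run the example against all attached controllers, shared-memory instance id 0
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0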
00:26:23.466
00:26:23.466 real 0m0.348s
00:26:23.466 user 0m0.146s
00:26:23.466 sys 0m0.145s
00:26:23.466 06:52:56 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:23.466 06:52:56 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:26:23.466 ************************************
00:26:23.466 END TEST nvme_hello_world
00:26:23.466 ************************************
00:26:23.466 06:52:56 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:26:23.466 06:52:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:23.466 06:52:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:23.466 06:52:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:23.466 ************************************
00:26:23.466 START TEST nvme_sgl
00:26:23.466 ************************************
00:26:23.466 06:52:56 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:26:24.034 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:26:24.034 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:26:24.034 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:26:24.034 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:26:24.034 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:26:24.034 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:26:24.034 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:26:24.034 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:26:24.034 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:26:24.034 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:26:24.034 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:26:24.034 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:26:24.034 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:26:24.034 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:26:24.034 NVMe Readv/Writev Request test
00:26:24.034 Attached to 0000:00:10.0
00:26:24.034 Attached to 0000:00:11.0
00:26:24.034 Attached to 0000:00:13.0
00:26:24.034 Attached to 0000:00:12.0
00:26:24.034 0000:00:10.0: build_io_request_2 test passed
00:26:24.034 0000:00:10.0: build_io_request_4 test passed
00:26:24.034 0000:00:10.0: build_io_request_5 test passed
00:26:24.034 0000:00:10.0: build_io_request_6 test passed
00:26:24.034 0000:00:10.0: build_io_request_7 test passed
00:26:24.034 0000:00:10.0: build_io_request_10 test passed
00:26:24.034 0000:00:11.0: build_io_request_2 test passed
00:26:24.034 0000:00:11.0: build_io_request_4 test passed
00:26:24.034 0000:00:11.0: build_io_request_5 test passed
00:26:24.034 0000:00:11.0: build_io_request_6 test passed
00:26:24.034 0000:00:11.0: build_io_request_7 test passed
00:26:24.034 0000:00:11.0: build_io_request_10 test passed
00:26:24.034 Cleaning up...
00:26:24.034
00:26:24.034 real 0m0.422s
00:26:24.034 user 0m0.219s
00:26:24.034 sys 0m0.142s
00:26:24.034 06:52:56 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:24.034 06:52:56 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:26:24.034 ************************************
00:26:24.034 END TEST nvme_sgl
00:26:24.034 ************************************
00:26:24.034 06:52:56 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:26:24.034 06:52:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:24.034 06:52:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:24.034 06:52:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:24.034 ************************************
00:26:24.034 START TEST nvme_e2edp
00:26:24.034 ************************************
00:26:24.034 06:52:56 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:26:24.293 NVMe Write/Read with End-to-End data protection test
00:26:24.293 Attached to 0000:00:10.0
00:26:24.293 Attached to 0000:00:11.0
00:26:24.293 Attached to 0000:00:13.0
00:26:24.293 Attached to 0000:00:12.0
00:26:24.293 Cleaning up...
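The sgl pass deliberately submits both malformed and well-formed scatter-gather requests: the "Invalid IO length parameter" lines are rejections the test expects, while the "test passed" lines are the valid readv/writev cases. The e2edp pass above then drives the same controllers through end-to-end data protection writes. Both are standalone binaries, so a sketch of invoking them directly, assuming the repo paths shown in the run_test lines:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp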
00:26:24.293
00:26:24.293 real 0m0.318s
00:26:24.293 user 0m0.125s
00:26:24.293 sys 0m0.146s
00:26:24.293 06:52:56 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:24.293 06:52:56 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:26:24.293 ************************************
00:26:24.293 END TEST nvme_e2edp
00:26:24.293 ************************************
00:26:24.293 06:52:56 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:26:24.293 06:52:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:24.293 06:52:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:24.293 06:52:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:24.293 ************************************
00:26:24.293 START TEST nvme_reserve
00:26:24.293 ************************************
00:26:24.293 06:52:56 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:26:24.861 =====================================================
00:26:24.861 NVMe Controller at PCI bus 0, device 16, function 0
00:26:24.861 =====================================================
00:26:24.861 Reservations: Not Supported
00:26:24.861 =====================================================
00:26:24.861 NVMe Controller at PCI bus 0, device 17, function 0
00:26:24.861 =====================================================
00:26:24.861 Reservations: Not Supported
00:26:24.861 =====================================================
00:26:24.861 NVMe Controller at PCI bus 0, device 19, function 0
00:26:24.861 =====================================================
00:26:24.861 Reservations: Not Supported
00:26:24.861 =====================================================
00:26:24.861 NVMe Controller at PCI bus 0, device 18, function 0
00:26:24.861 =====================================================
00:26:24.861 Reservations: Not Supported
00:26:24.861 Reservation test passed
00:26:24.861
00:26:24.861 real 0m0.340s
00:26:24.861 user 0m0.141s
00:26:24.861 sys 0m0.145s
00:26:24.861 06:52:57 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:24.861 06:52:57 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:26:24.861 ************************************
00:26:24.861 END TEST nvme_reserve
00:26:24.861 ************************************
00:26:24.861 06:52:57 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:26:24.861 06:52:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:24.861 06:52:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:24.861 06:52:57 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:24.861 ************************************
00:26:24.861 START TEST nvme_err_injection
00:26:24.861 ************************************
00:26:24.861 06:52:57 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:26:25.120 NVMe Error Injection test
00:26:25.120 Attached to 0000:00:10.0
00:26:25.120 Attached to 0000:00:11.0
00:26:25.120 Attached to 0000:00:13.0
00:26:25.121 Attached to 0000:00:12.0
00:26:25.121 0000:00:13.0: get features failed as expected
00:26:25.121 0000:00:12.0: get features failed as expected
00:26:25.121 0000:00:10.0: get features failed as expected
00:26:25.121 0000:00:11.0: get features failed as expected
00:26:25.121 0000:00:10.0: get features successfully as expected
00:26:25.121 0000:00:11.0: get features successfully as expected
00:26:25.121 0000:00:13.0: get features successfully as expected
00:26:25.121 0000:00:12.0: get features successfully as expected
00:26:25.121 0000:00:10.0: read failed as expected
00:26:25.121 0000:00:11.0: read failed as expected
00:26:25.121 0000:00:13.0: read failed as expected
00:26:25.121 0000:00:12.0: read failed as expected
00:26:25.121 0000:00:10.0: read successfully as expected
00:26:25.121 0000:00:11.0: read successfully as expected
00:26:25.121 0000:00:13.0: read successfully as expected
00:26:25.121 0000:00:12.0: read successfully as expected
00:26:25.121 Cleaning up...
00:26:25.121
00:26:25.121 real 0m0.337s
00:26:25.121 user 0m0.136s
00:26:25.121 sys 0m0.154s
00:26:25.121 06:52:57 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:25.121 06:52:57 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:26:25.121 ************************************
00:26:25.121 END TEST nvme_err_injection
00:26:25.121 ************************************
00:26:25.121 06:52:57 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:26:25.121 06:52:57 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:26:25.121 06:52:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:25.121 06:52:57 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:25.121 ************************************
00:26:25.121 START TEST nvme_overhead
00:26:25.121 ************************************
00:26:25.121 06:52:57 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:26:26.524 Initializing NVMe Controllers
00:26:26.524 Attached to 0000:00:10.0
00:26:26.524 Attached to 0000:00:11.0
00:26:26.524 Attached to 0000:00:13.0
00:26:26.524 Attached to 0000:00:12.0
00:26:26.524 Initialization complete. Launching workers.
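Per its run_test line, the overhead tool was started as 'overhead -o 4096 -t 1 -H -i 0': 4096-byte IOs for 1 second on shared-memory instance 0, with -H presumably selecting the submit/complete histograms printed below (that flag reading is inferred from the invocation and output, not from the tool's help text). Direct invocation sketch:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0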
00:26:26.524 submit (in ns) avg, min, max = 15870.0, 13454.1, 52823.2
00:26:26.524 complete (in ns) avg, min, max = 10317.9, 9471.4, 90095.0
00:26:26.524
00:26:26.524 Submit histogram
00:26:26.524 ================
00:26:26.524 Range in us Cumulative Count
00:26:26.524 [per-bucket rows elided: submit latencies span 13.440 - 52.829 us, cumulative 100.0000% ( 1) in the final bucket]
00:26:26.524
00:26:26.524 Complete histogram
00:26:26.524 ==================
00:26:26.524 Range in us Cumulative Count
00:26:26.525 [per-bucket rows elided: complete latencies span 9.425 - 90.298 us, cumulative 100.0000% ( 1) in the final bucket]
00:26:26.525
00:26:26.525
00:26:26.525 real 0m1.336s
00:26:26.525 user 0m1.124s
00:26:26.525 sys 0m0.155s
00:26:26.525 06:52:58 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:26.525 06:52:58 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:26:26.525 ************************************
00:26:26.525 END TEST nvme_overhead
00:26:26.525 ************************************
00:26:26.525 06:52:59 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:26:26.525 06:52:59 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:26:26.525 06:52:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:26.525 06:52:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:26.525 ************************************
00:26:26.525 START TEST nvme_arbitration
00:26:26.525 ************************************
00:26:26.525 06:52:59 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:26:30.720 Initializing NVMe Controllers
00:26:30.720 Attached to 0000:00:10.0
00:26:30.720 Attached to 0000:00:11.0
00:26:30.720 Attached to 0000:00:13.0
00:26:30.720 Attached to 0000:00:12.0
00:26:30.720 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:26:30.720 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:26:30.720 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:26:30.720 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:26:30.720 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:26:30.720 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:26:30.720 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:26:30.720 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:26:30.720 Initialization complete. Launching workers.
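Only -t 3 -i 0 were passed to the arbitration example; the long '-q 64 -s 131072 -w randrw -M 50 ...' line above is the tool echoing its effective configuration (defaults plus those two flags), and its per-core throughput follows below. A sketch of rerunning it with the same explicit arguments:

    sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0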
00:26:30.720 Starting thread on core 1 with urgent priority queue
00:26:30.720 Starting thread on core 2 with urgent priority queue
00:26:30.720 Starting thread on core 3 with urgent priority queue
00:26:30.720 Starting thread on core 0 with urgent priority queue
00:26:30.720 QEMU NVMe Ctrl (12340 ) core 0: 597.33 IO/s 167.41 secs/100000 ios
00:26:30.720 QEMU NVMe Ctrl (12342 ) core 0: 597.33 IO/s 167.41 secs/100000 ios
00:26:30.720 QEMU NVMe Ctrl (12341 ) core 1: 725.33 IO/s 137.87 secs/100000 ios
00:26:30.720 QEMU NVMe Ctrl (12342 ) core 1: 725.33 IO/s 137.87 secs/100000 ios
00:26:30.720 QEMU NVMe Ctrl (12343 ) core 2: 597.33 IO/s 167.41 secs/100000 ios
00:26:30.720 QEMU NVMe Ctrl (12342 ) core 3: 704.00 IO/s 142.05 secs/100000 ios
00:26:30.720 ========================================================
00:26:30.720
00:26:30.720
00:26:30.720 real 0m3.404s
00:26:30.720 user 0m9.330s
00:26:30.720 sys 0m0.152s
00:26:30.720 06:53:02 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:30.720 06:53:02 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:26:30.720 ************************************
00:26:30.720 END TEST nvme_arbitration
00:26:30.720 ************************************
00:26:30.720 06:53:02 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:26:30.720 06:53:02 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:26:30.720 06:53:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:30.720 06:53:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:30.720 ************************************
00:26:30.720 START TEST nvme_single_aen
00:26:30.720 ************************************
00:26:30.720 06:53:02 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:26:30.720 Asynchronous Event Request test
00:26:30.720 Attached to 0000:00:10.0
00:26:30.720 Attached to 0000:00:11.0
00:26:30.720 Attached to 0000:00:13.0
00:26:30.720 Attached to 0000:00:12.0
00:26:30.720 Reset controller to setup AER completions for this process
00:26:30.720 Registering asynchronous event callbacks...
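The AER output that continues below comes from 'aer -T -i 0' (see the run_test line above); judging by the trace, -T selects the temperature-threshold path, where the test lowers each controller's threshold so the drive fires an Asynchronous Event Request. Direct invocation sketch, same repo-layout assumptions as the earlier examples:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0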
00:26:30.720 Getting orig temperature thresholds of all controllers
00:26:30.720 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:26:30.720 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:26:30.720 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:26:30.720 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:26:30.720 Setting all controllers temperature threshold low to trigger AER
00:26:30.720 Waiting for all controllers temperature threshold to be set lower
00:26:30.720 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:26:30.720 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:26:30.720 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:26:30.720 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:26:30.720 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:26:30.720 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:26:30.720 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:26:30.720 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:26:30.720 Waiting for all controllers to trigger AER and reset threshold
00:26:30.720 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:26:30.720 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:26:30.720 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:26:30.720 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:26:30.720 Cleaning up...
00:26:30.720
00:26:30.720 real 0m0.296s
00:26:30.720 user 0m0.111s
00:26:30.720 sys 0m0.131s
00:26:30.720 06:53:02 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:30.720 06:53:02 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:26:30.720 ************************************
00:26:30.720 END TEST nvme_single_aen
00:26:30.720 ************************************
00:26:30.720 06:53:02 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:26:30.721 06:53:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:30.721 06:53:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:30.721 06:53:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:26:30.721 ************************************
00:26:30.721 START TEST nvme_doorbell_aers
00:26:30.721 ************************************
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
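nvme_doorbell_aers builds its device list by piping SPDK's generated controller config through jq, then (as the trace below shows) loops doorbell_aers over each PCIe address under a 10-second timeout. A sketch of that enumeration and loop, using the exact commands from the xtrace:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # give each device at most 10 seconds, forwarding the child's exit status
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r "trtype:PCIe traddr:$bdf"
    done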
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:26:30.721 06:53:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:26:30.721 [2024-12-06 06:53:03.182691] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:26:40.782 Executing: test_write_invalid_db
00:26:40.782 Waiting for AER completion...
00:26:40.782 Failure: test_write_invalid_db
00:26:40.782
00:26:40.782 Executing: test_invalid_db_write_overflow_sq
00:26:40.782 Waiting for AER completion...
00:26:40.782 Failure: test_invalid_db_write_overflow_sq
00:26:40.782
00:26:40.782 Executing: test_invalid_db_write_overflow_cq
00:26:40.782 Waiting for AER completion...
00:26:40.782 Failure: test_invalid_db_write_overflow_cq
00:26:40.782
00:26:40.782 06:53:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:26:40.782 06:53:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:26:40.782 [2024-12-06 06:53:13.241574] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:26:50.753 Executing: test_write_invalid_db
00:26:50.753 Waiting for AER completion...
00:26:50.753 Failure: test_write_invalid_db
00:26:50.753
00:26:50.753 Executing: test_invalid_db_write_overflow_sq
00:26:50.753 Waiting for AER completion...
00:26:50.753 Failure: test_invalid_db_write_overflow_sq
00:26:50.753
00:26:50.753 Executing: test_invalid_db_write_overflow_cq
00:26:50.753 Waiting for AER completion...
00:27:00.733 Failure: test_invalid_db_write_overflow_cq
00:27:00.733
00:27:00.733 06:53:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:27:00.733 06:53:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:27:00.733 [2024-12-06 06:53:33.294860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.706 Executing: test_write_invalid_db
00:27:10.706 Waiting for AER completion...
00:27:10.706 Failure: test_write_invalid_db
00:27:10.706
00:27:10.706 Executing: test_invalid_db_write_overflow_sq
00:27:10.706 Waiting for AER completion...
00:27:10.706 Failure: test_invalid_db_write_overflow_sq
00:27:10.706
00:27:10.706 Executing: test_invalid_db_write_overflow_cq
00:27:10.706 Waiting for AER completion...
00:27:10.706 Failure: test_invalid_db_write_overflow_cq
00:27:10.706
00:27:10.706
00:27:10.706 real 0m40.255s
00:27:10.706 user 0m34.189s
00:27:10.706 sys 0m5.680s
00:27:10.706 06:53:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:10.706 06:53:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:27:10.706 ************************************
00:27:10.706 END TEST nvme_doorbell_aers
00:27:10.706 ************************************
00:27:10.706 06:53:43 nvme -- nvme/nvme.sh@97 -- # uname
00:27:10.706 06:53:43 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:27:10.706 06:53:43 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:27:10.706 06:53:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:27:10.706 06:53:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:10.706 06:53:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:27:10.706 ************************************
00:27:10.706 START TEST nvme_multi_aen
00:27:10.706 ************************************
00:27:10.706 06:53:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:27:10.965 [2024-12-06 06:53:43.364243] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.364355] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.364379] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.366137] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.366197] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.366217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.367644] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.367695] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.369174] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.369221] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 [2024-12-06 06:53:43.369239] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64536) is not found. Dropping the request.
00:27:10.965 Child process pid: 65051
00:27:11.223 [Child] Asynchronous Event Request test
00:27:11.223 [Child] Attached to 0000:00:10.0
00:27:11.223 [Child] Attached to 0000:00:11.0
00:27:11.223 [Child] Attached to 0000:00:13.0
00:27:11.223 [Child] Attached to 0000:00:12.0
00:27:11.223 [Child] Registering asynchronous event callbacks...
00:27:11.223 [Child] Getting orig temperature thresholds of all controllers
00:27:11.223 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:11.223 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:11.223 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:11.223 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:11.223 [Child] Waiting for all controllers to trigger AER and reset threshold
00:27:11.223 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:11.223 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:11.223 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:11.223 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:11.223 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:27:11.223 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:27:11.223 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:27:11.223 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:27:11.223 [Child] Cleaning up...
00:27:11.223 Asynchronous Event Request test
00:27:11.223 Attached to 0000:00:10.0
00:27:11.223 Attached to 0000:00:11.0
00:27:11.223 Attached to 0000:00:13.0
00:27:11.223 Attached to 0000:00:12.0
00:27:11.223 Reset controller to setup AER completions for this process
00:27:11.223 Registering asynchronous event callbacks...
00:27:11.223 Getting orig temperature thresholds of all controllers
00:27:11.223 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:11.223 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:11.223 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:11.223 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:11.223 Setting all controllers temperature threshold low to trigger AER
00:27:11.223 Waiting for all controllers temperature threshold to be set lower
00:27:11.223 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:11.223 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:27:11.223 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:11.223 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:27:11.224 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:11.224 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:27:11.224 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:11.224 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:27:11.224 Waiting for all controllers to trigger AER and reset threshold
00:27:11.224 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:27:11.224 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:27:11.224 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:27:11.224 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:27:11.224 Cleaning up...
00:27:11.224
00:27:11.224 real 0m0.595s
00:27:11.224 user 0m0.242s
00:27:11.224 sys 0m0.240s
00:27:11.224 06:53:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:11.224 06:53:43 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:27:11.224 ************************************
00:27:11.224 END TEST nvme_multi_aen
00:27:11.224 ************************************
00:27:11.224 06:53:43 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:27:11.224 06:53:43 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:11.224 06:53:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:11.224 06:53:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:27:11.224 ************************************
00:27:11.224 START TEST nvme_startup
00:27:11.224 ************************************
00:27:11.224 06:53:43 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:27:11.791 Initializing NVMe Controllers
00:27:11.791 Attached to 0000:00:10.0
00:27:11.791 Attached to 0000:00:11.0
00:27:11.791 Attached to 0000:00:13.0
00:27:11.791 Attached to 0000:00:12.0
00:27:11.791 Initialization complete.
00:27:11.791 Time used:230590.438 (us).
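nvme_startup bounds controller bring-up time: 'startup -t 1000000' attaches to every controller and reports the initialization time in microseconds (the "Time used" line above); the -t value is the budget taken from the run_test line, and the pass/fail semantics are inferred from that usage rather than from the tool's documentation. Direct invocation sketch:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000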
00:27:11.791
00:27:11.791 real 0m0.324s
00:27:11.791 user 0m0.127s
00:27:11.791 sys 0m0.147s
00:27:11.791 06:53:44 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:11.791 ************************************
00:27:11.791 END TEST nvme_startup
00:27:11.791 06:53:44 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:27:11.791 ************************************
00:27:11.791 06:53:44 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:27:11.791 06:53:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:11.791 06:53:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:11.791 06:53:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:27:11.791 ************************************
00:27:11.791 START TEST nvme_multi_secondary
00:27:11.791 ************************************
00:27:11.791 06:53:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:27:11.791 06:53:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65102
00:27:11.791 06:53:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:27:11.791 06:53:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65103
00:27:11.791 06:53:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:27:11.791 06:53:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:27:15.072 Initializing NVMe Controllers
00:27:15.072 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:27:15.072 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:27:15.072 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:27:15.072 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:27:15.072 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:27:15.072 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:27:15.072 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:27:15.072 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:27:15.072 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:27:15.072 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:27:15.072 Initialization complete. Launching workers.
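nvme_multi_secondary launches several spdk_nvme_perf processes against the same controllers, differing only in core mask; because they all pass '-i 0' they join one shared-memory instance, and in DPDK multi-process the first process to initialize becomes the primary while later ones attach as secondaries. A sketch of the pairing, using the exact commands from the trace above; the per-core latency tables it produces follow below:

    # first process to initialize instance 0 becomes the primary (core 0, 5 s)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    # later processes attach to the same instance as secondaries (cores 1 and 2, 3 s)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
    wait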
00:27:15.072 ========================================================
00:27:15.072 Latency(us)
00:27:15.072 Device Information : IOPS MiB/s Average min max
00:27:15.072 PCIE (0000:00:10.0) NSID 1 from core 1: 5897.84 23.04 2711.07 1103.25 6776.50
00:27:15.072 PCIE (0000:00:11.0) NSID 1 from core 1: 5897.84 23.04 2712.35 1147.20 7624.56
00:27:15.072 PCIE (0000:00:13.0) NSID 1 from core 1: 5897.84 23.04 2712.29 1132.39 7881.34
00:27:15.072 PCIE (0000:00:12.0) NSID 1 from core 1: 5897.84 23.04 2712.38 1117.31 8657.55
00:27:15.072 PCIE (0000:00:12.0) NSID 2 from core 1: 5897.84 23.04 2712.31 1140.51 8380.65
00:27:15.072 PCIE (0000:00:12.0) NSID 3 from core 1: 5897.84 23.04 2712.26 1129.85 8664.21
00:27:15.072 ========================================================
00:27:15.072 Total : 35387.05 138.23 2712.11 1103.25 8664.21
00:27:15.072
00:27:15.332 Initializing NVMe Controllers
00:27:15.332 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:27:15.332 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:27:15.332 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:27:15.332 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:27:15.332 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:27:15.332 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:27:15.332 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:27:15.332 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:27:15.332 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:27:15.332 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:27:15.332 Initialization complete. Launching workers.
00:27:15.332 ========================================================
00:27:15.332 Latency(us)
00:27:15.332 Device Information : IOPS MiB/s Average min max
00:27:15.332 PCIE (0000:00:10.0) NSID 1 from core 2: 2554.60 9.98 6259.84 1637.37 15056.89
00:27:15.332 PCIE (0000:00:11.0) NSID 1 from core 2: 2554.60 9.98 6262.83 1581.71 15001.54
00:27:15.332 PCIE (0000:00:13.0) NSID 1 from core 2: 2554.60 9.98 6262.91 1662.70 15821.97
00:27:15.332 PCIE (0000:00:12.0) NSID 1 from core 2: 2554.60 9.98 6263.17 1564.82 16995.62
00:27:15.332 PCIE (0000:00:12.0) NSID 2 from core 2: 2554.60 9.98 6263.17 1556.56 16164.01
00:27:15.332 PCIE (0000:00:12.0) NSID 3 from core 2: 2554.60 9.98 6263.55 1598.44 15399.45
00:27:15.332 ========================================================
00:27:15.332 Total : 15327.58 59.87 6262.58 1556.56 16995.62
00:27:15.332
00:27:15.332 06:53:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65102
00:27:17.225 Initializing NVMe Controllers
00:27:17.225 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:27:17.225 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:27:17.225 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:27:17.225 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:27:17.225 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:27:17.225 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:27:17.225 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:27:17.225 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:27:17.225 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:27:17.225 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:27:17.225 Initialization complete. Launching workers.
00:27:17.225 ======================================================== 00:27:17.225 Latency(us) 00:27:17.225 Device Information : IOPS MiB/s Average min max 00:27:17.225 PCIE (0000:00:10.0) NSID 1 from core 0: 8556.02 33.42 1868.46 908.34 5461.05 00:27:17.225 PCIE (0000:00:11.0) NSID 1 from core 0: 8556.02 33.42 1869.51 956.42 5413.25 00:27:17.225 PCIE (0000:00:13.0) NSID 1 from core 0: 8556.02 33.42 1869.47 890.99 5679.32 00:27:17.225 PCIE (0000:00:12.0) NSID 1 from core 0: 8556.02 33.42 1869.43 837.95 5991.67 00:27:17.225 PCIE (0000:00:12.0) NSID 2 from core 0: 8556.02 33.42 1869.40 775.11 5709.52 00:27:17.225 PCIE (0000:00:12.0) NSID 3 from core 0: 8556.02 33.42 1869.35 704.84 5434.72 00:27:17.225 ======================================================== 00:27:17.225 Total : 51336.13 200.53 1869.27 704.84 5991.67 00:27:17.225 00:27:17.225 06:53:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65103 00:27:17.225 06:53:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65178 00:27:17.225 06:53:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65179 00:27:17.225 06:53:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:27:17.225 06:53:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:27:17.225 06:53:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:27:20.504 Initializing NVMe Controllers 00:27:20.504 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:27:20.504 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:27:20.504 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:27:20.504 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:27:20.504 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:27:20.504 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:27:20.504 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:27:20.504 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:27:20.504 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:27:20.504 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:27:20.504 Initialization complete. Launching workers. 
00:27:20.504 ======================================================== 00:27:20.504 Latency(us) 00:27:20.504 Device Information : IOPS MiB/s Average min max 00:27:20.504 PCIE (0000:00:10.0) NSID 1 from core 1: 5561.24 21.72 2875.31 945.89 6475.07 00:27:20.504 PCIE (0000:00:11.0) NSID 1 from core 1: 5561.24 21.72 2877.13 972.99 6360.59 00:27:20.504 PCIE (0000:00:13.0) NSID 1 from core 1: 5561.24 21.72 2877.23 971.53 6206.56 00:27:20.504 PCIE (0000:00:12.0) NSID 1 from core 1: 5561.24 21.72 2877.74 938.73 6522.77 00:27:20.504 PCIE (0000:00:12.0) NSID 2 from core 1: 5561.24 21.72 2878.17 955.28 6816.88 00:27:20.504 PCIE (0000:00:12.0) NSID 3 from core 1: 5561.24 21.72 2878.48 941.26 6601.93 00:27:20.504 ======================================================== 00:27:20.504 Total : 33367.41 130.34 2877.34 938.73 6816.88 00:27:20.504 00:27:20.764 Initializing NVMe Controllers 00:27:20.764 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:27:20.764 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:27:20.764 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:27:20.764 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:27:20.764 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:27:20.764 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:27:20.764 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:27:20.764 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:27:20.764 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:27:20.764 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:27:20.764 Initialization complete. Launching workers. 00:27:20.764 ======================================================== 00:27:20.764 Latency(us) 00:27:20.764 Device Information : IOPS MiB/s Average min max 00:27:20.764 PCIE (0000:00:10.0) NSID 1 from core 0: 5310.15 20.74 3011.10 1073.00 7569.81 00:27:20.764 PCIE (0000:00:11.0) NSID 1 from core 0: 5310.15 20.74 3012.25 1117.41 7286.54 00:27:20.764 PCIE (0000:00:13.0) NSID 1 from core 0: 5310.15 20.74 3012.08 1020.51 7165.59 00:27:20.764 PCIE (0000:00:12.0) NSID 1 from core 0: 5310.15 20.74 3011.84 985.67 6999.31 00:27:20.764 PCIE (0000:00:12.0) NSID 2 from core 0: 5310.15 20.74 3011.63 921.78 6871.25 00:27:20.764 PCIE (0000:00:12.0) NSID 3 from core 0: 5310.15 20.74 3011.43 874.58 7531.75 00:27:20.764 ======================================================== 00:27:20.764 Total : 31860.91 124.46 3011.72 874.58 7569.81 00:27:20.764 00:27:22.724 Initializing NVMe Controllers 00:27:22.724 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:27:22.724 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:27:22.724 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:27:22.724 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:27:22.724 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:27:22.724 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:27:22.724 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:27:22.724 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:27:22.724 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:27:22.724 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:27:22.724 Initialization complete. Launching workers. 
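Latency and throughput in these tables are also tied together by Little's law: with the queue held at depth 16 (-q 16), the Average column should come out close to queue_depth / IOPS. Checking the core 1 rows above:

    # Little's law: avg latency ~ queue_depth / IOPS, here in microseconds
    echo 'scale=8; 16 / 5561.24 * 1000000' | bc   # -> ~2877.05 us vs the reported 2877.34 us average

The slower core 2 rows from the earlier round fit the same identity (16 / 2554.60 ≈ 6263 us against the ~6262.6 us reported), which is why the "from core 2" tables show roughly half the IOPS at roughly double the latency.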
00:27:22.724 ======================================================== 00:27:22.724 Latency(us) 00:27:22.724 Device Information : IOPS MiB/s Average min max 00:27:22.724 PCIE (0000:00:10.0) NSID 1 from core 2: 3485.37 13.61 4587.69 1027.68 14408.29 00:27:22.724 PCIE (0000:00:11.0) NSID 1 from core 2: 3485.37 13.61 4589.23 1049.50 13580.27 00:27:22.724 PCIE (0000:00:13.0) NSID 1 from core 2: 3485.37 13.61 4589.84 1063.23 14937.56 00:27:22.724 PCIE (0000:00:12.0) NSID 1 from core 2: 3485.37 13.61 4588.61 1072.99 14731.16 00:27:22.724 PCIE (0000:00:12.0) NSID 2 from core 2: 3488.57 13.63 4581.37 1058.44 14921.61 00:27:22.724 PCIE (0000:00:12.0) NSID 3 from core 2: 3488.57 13.63 4581.93 889.41 14399.27 00:27:22.724 ======================================================== 00:27:22.724 Total : 20918.63 81.71 4586.44 889.41 14937.56 00:27:22.724 00:27:22.724 06:53:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65178 00:27:22.724 06:53:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65179 00:27:22.724 00:27:22.724 real 0m10.872s 00:27:22.724 user 0m18.683s 00:27:22.724 sys 0m1.018s 00:27:22.724 06:53:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.724 06:53:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:27:22.724 ************************************ 00:27:22.724 END TEST nvme_multi_secondary 00:27:22.724 ************************************ 00:27:22.724 06:53:55 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:27:22.724 06:53:55 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:27:22.724 06:53:55 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64110 ]] 00:27:22.724 06:53:55 nvme -- common/autotest_common.sh@1094 -- # kill 64110 00:27:22.724 06:53:55 nvme -- common/autotest_common.sh@1095 -- # wait 64110 00:27:22.724 [2024-12-06 06:53:55.055049] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.055130] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.055171] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.055194] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.057530] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.057594] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.057617] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.057639] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.060202] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 
00:27:22.724 [2024-12-06 06:53:55.060261] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.060283] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.060302] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.062556] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.062616] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.062638] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 [2024-12-06 06:53:55.062658] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65049) is not found. Dropping the request. 00:27:22.724 06:53:55 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:27:22.724 06:53:55 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:27:22.724 06:53:55 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:27:22.724 06:53:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:22.724 06:53:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.724 06:53:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:22.724 ************************************ 00:27:22.724 START TEST bdev_nvme_reset_stuck_adm_cmd 00:27:22.724 ************************************ 00:27:22.724 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:27:22.982 * Looking for test storage... 
00:27:22.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:22.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.982 --rc genhtml_branch_coverage=1 00:27:22.982 --rc genhtml_function_coverage=1 00:27:22.982 --rc genhtml_legend=1 00:27:22.982 --rc geninfo_all_blocks=1 00:27:22.982 --rc geninfo_unexecuted_blocks=1 00:27:22.982 00:27:22.982 ' 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:22.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.982 --rc genhtml_branch_coverage=1 00:27:22.982 --rc genhtml_function_coverage=1 00:27:22.982 --rc genhtml_legend=1 00:27:22.982 --rc geninfo_all_blocks=1 00:27:22.982 --rc geninfo_unexecuted_blocks=1 00:27:22.982 00:27:22.982 ' 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:22.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.982 --rc genhtml_branch_coverage=1 00:27:22.982 --rc genhtml_function_coverage=1 00:27:22.982 --rc genhtml_legend=1 00:27:22.982 --rc geninfo_all_blocks=1 00:27:22.982 --rc geninfo_unexecuted_blocks=1 00:27:22.982 00:27:22.982 ' 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:22.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.982 --rc genhtml_branch_coverage=1 00:27:22.982 --rc genhtml_function_coverage=1 00:27:22.982 --rc genhtml_legend=1 00:27:22.982 --rc geninfo_all_blocks=1 00:27:22.982 --rc geninfo_unexecuted_blocks=1 00:27:22.982 00:27:22.982 ' 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:27:22.982 
06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:27:22.982 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65342 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65342 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65342 ']' 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
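The get_first_nvme_bdf call traced just above boils down to asking gen_nvme.sh for a generated bdev config and pulling every controller's PCI address out of it with jq. A condensed sketch of what the trace shows, keeping the sanity check and dropping the helper plumbing:

    # List every NVMe PCI address from gen_nvme.sh's JSON config, keep the first.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # mirrors the (( 4 == 0 )) guard above; four controllers here
    bdf=${bdfs[0]}                    # -> 0000:00:10.0 on this machine

The resolved bdf then seeds the bdev_nvme_attach_controller call made once spdk_tgt is up and listening.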
00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.983 06:53:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:27:23.240 [2024-12-06 06:53:55.661629] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:27:23.240 [2024-12-06 06:53:55.661837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65342 ] 00:27:23.499 [2024-12-06 06:53:55.869014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:23.499 [2024-12-06 06:53:55.998802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.499 [2024-12-06 06:53:55.998941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.499 [2024-12-06 06:53:55.999045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.499 [2024-12-06 06:53:55.999359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.432 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.432 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:27:24.432 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:27:24.432 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.432 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:27:24.432 nvme0n1 00:27:24.432 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.432 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:27:24.432 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_7HE0J.txt 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:27:24.433 true 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733468036 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65370 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:27:24.433 06:53:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:27:26.334 [2024-12-06 06:53:58.889098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:27:26.334 [2024-12-06 06:53:58.889564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.334 [2024-12-06 06:53:58.889618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:26.334 [2024-12-06 06:53:58.889640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.334 [2024-12-06 06:53:58.891733] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.334 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65370 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65370 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65370 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.334 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_7HE0J.txt 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:27:26.592 06:53:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_7HE0J.txt 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65342 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65342 ']' 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65342 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65342 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65342' 00:27:26.593 killing process with pid 65342 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65342 00:27:26.593 06:53:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65342 00:27:28.554 06:54:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:27:28.554 06:54:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:27:28.554 00:27:28.554 real 0m5.869s 00:27:28.554 user 0m20.637s 00:27:28.554 sys 0m0.626s 00:27:28.554 06:54:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 
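Stripped of the xtrace noise, the test that just finished reduces to a handful of RPCs: attach the controller, arm a one-shot error injection on admin opcode 10 (Get Features) that holds the command for up to 15 s via --do_not_submit, fire that command asynchronously, reset the controller while the command is stuck, then check both the decoded completion status and the elapsed time. A skeleton under the assumption that $RPC is the logged scripts/rpc.py and $GET_FEATURES_B64 stands for the base64 command payload shown in the trace; every RPC name and flag below appears verbatim above, only the sequencing glue is simplified:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_B64" &
    sleep 2
    $RPC bdev_nvme_reset_controller nvme0   # completes the held command manually
    wait                                    # collect the send_cmd completion JSON
    $RPC bdev_nvme_detach_controller nvme0
    # Pass criteria, as evaluated at the end of the trace: the completion must carry
    # the injected SCT=0 / SC=1, and the reset must land inside the 5 s test budget.
    (( nvme_status_sc == err_injection_sc && nvme_status_sct == err_injection_sct ))
    (( diff_time <= test_timeout ))

Here the saved cpl field, run through the base64 -d / hexdump helper seen in the trace, decoded to nvme_status_sc=0x1 and nvme_status_sct=0x0, matching the injection, and the reset took 2 s against the 5 s budget.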
00:27:28.554 ************************************ 00:27:28.554 END TEST bdev_nvme_reset_stuck_adm_cmd 00:27:28.554 ************************************ 00:27:28.554 06:54:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:27:28.812 06:54:01 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:27:28.812 06:54:01 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:27:28.812 06:54:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:28.812 06:54:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.812 06:54:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:28.812 ************************************ 00:27:28.812 START TEST nvme_fio 00:27:28.812 ************************************ 00:27:28.812 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:27:28.812 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:27:28.812 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:27:28.812 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:27:28.812 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:28.812 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:27:28.812 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:28.812 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:28.812 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:28.812 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:27:28.812 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:28.812 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:27:28.812 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:27:28.812 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:27:28.812 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:27:28.812 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:27:29.071 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:27:29.071 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:27:29.329 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:27:29.329 06:54:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:29.329 06:54:01 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:29.329 06:54:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:27:29.588 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:29.588 fio-3.35 00:27:29.588 Starting 1 thread 00:27:32.874 00:27:32.874 test: (groupid=0, jobs=1): err= 0: pid=65519: Fri Dec 6 06:54:05 2024 00:27:32.874 read: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(111MiB/2001msec) 00:27:32.874 slat (usec): min=4, max=151, avg= 7.03, stdev= 2.30 00:27:32.874 clat (usec): min=263, max=8550, avg=4489.30, stdev=662.96 00:27:32.874 lat (usec): min=270, max=8564, avg=4496.33, stdev=663.70 00:27:32.874 clat percentiles (usec): 00:27:32.874 | 1.00th=[ 3458], 5.00th=[ 3720], 10.00th=[ 3982], 20.00th=[ 4146], 00:27:32.874 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:27:32.874 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5866], 00:27:32.874 | 99.00th=[ 7177], 99.50th=[ 7635], 99.90th=[ 8356], 99.95th=[ 8455], 00:27:32.874 | 99.99th=[ 8455] 00:27:32.874 bw ( KiB/s): min=54200, max=57792, per=97.91%, avg=55568.00, stdev=1942.98, samples=3 00:27:32.874 iops : min=13550, max=14448, avg=13892.00, stdev=485.74, samples=3 00:27:32.874 write: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(111MiB/2001msec); 0 zone resets 00:27:32.874 slat (nsec): min=4514, max=50715, avg=7160.58, stdev=2232.35 00:27:32.874 clat (usec): min=230, max=8703, avg=4498.06, stdev=667.21 00:27:32.874 lat (usec): min=235, max=8710, avg=4505.22, stdev=667.99 00:27:32.874 clat percentiles (usec): 00:27:32.874 | 1.00th=[ 3458], 5.00th=[ 3752], 10.00th=[ 3982], 20.00th=[ 4178], 00:27:32.874 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:27:32.874 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5866], 00:27:32.874 | 99.00th=[ 7242], 99.50th=[ 7832], 99.90th=[ 8455], 99.95th=[ 8455], 00:27:32.874 | 99.99th=[ 8586] 00:27:32.874 bw ( KiB/s): min=54320, max=57968, per=97.93%, avg=55592.00, stdev=2059.39, samples=3 00:27:32.874 iops : min=13580, max=14492, avg=13898.00, stdev=514.85, samples=3 00:27:32.874 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 00:27:32.874 lat (msec) : 2=0.07%, 4=10.76%, 10=89.12% 00:27:32.874 cpu : usr=98.85%, sys=0.10%, ctx=5, majf=0, minf=608 
00:27:32.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:32.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:32.874 issued rwts: total=28392,28399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:32.874 00:27:32.874 Run status group 0 (all jobs): 00:27:32.874 READ: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=111MiB (116MB), run=2001-2001msec 00:27:32.874 WRITE: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=111MiB (116MB), run=2001-2001msec 00:27:32.874 ----------------------------------------------------- 00:27:32.874 Suppressions used: 00:27:32.874 count bytes template 00:27:32.874 1 32 /usr/src/fio/parse.c 00:27:32.874 1 8 libtcmalloc_minimal.so 00:27:32.874 ----------------------------------------------------- 00:27:32.874 00:27:32.874 06:54:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:27:32.874 06:54:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:27:32.874 06:54:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:27:32.874 06:54:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:27:33.132 06:54:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:27:33.132 06:54:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:27:33.389 06:54:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:27:33.389 06:54:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:27:33.389 06:54:05 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:33.389 06:54:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:27:33.648 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:33.648 fio-3.35 00:27:33.648 Starting 1 thread 00:27:36.928 00:27:36.928 test: (groupid=0, jobs=1): err= 0: pid=65585: Fri Dec 6 06:54:09 2024 00:27:36.928 read: IOPS=16.3k, BW=63.5MiB/s (66.6MB/s)(127MiB/2001msec) 00:27:36.928 slat (nsec): min=4536, max=71802, avg=6069.81, stdev=1975.13 00:27:36.928 clat (usec): min=224, max=8380, avg=3911.94, stdev=549.24 00:27:36.928 lat (usec): min=229, max=8401, avg=3918.01, stdev=549.94 00:27:36.928 clat percentiles (usec): 00:27:36.928 | 1.00th=[ 2933], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3589], 00:27:36.928 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:27:36.928 | 70.00th=[ 3916], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4686], 00:27:36.928 | 99.00th=[ 6259], 99.50th=[ 6915], 99.90th=[ 7898], 99.95th=[ 8094], 00:27:36.928 | 99.99th=[ 8225] 00:27:36.928 bw ( KiB/s): min=62184, max=69936, per=100.00%, avg=65138.67, stdev=4191.65, samples=3 00:27:36.928 iops : min=15548, max=17482, avg=16284.67, stdev=1046.06, samples=3 00:27:36.928 write: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(127MiB/2001msec); 0 zone resets 00:27:36.928 slat (usec): min=4, max=104, avg= 6.17, stdev= 2.06 00:27:36.928 clat (usec): min=245, max=8291, avg=3917.16, stdev=548.05 00:27:36.928 lat (usec): min=251, max=8297, avg=3923.33, stdev=548.77 00:27:36.928 clat percentiles (usec): 00:27:36.928 | 1.00th=[ 2900], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3589], 00:27:36.928 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:27:36.928 | 70.00th=[ 3916], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4686], 00:27:36.928 | 99.00th=[ 6259], 99.50th=[ 6915], 99.90th=[ 7832], 99.95th=[ 8029], 00:27:36.928 | 99.99th=[ 8160] 00:27:36.928 bw ( KiB/s): min=62560, max=69512, per=99.53%, avg=64917.33, stdev=3979.55, samples=3 00:27:36.928 iops : min=15640, max=17378, avg=16229.33, stdev=994.89, samples=3 00:27:36.928 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:27:36.928 lat (msec) : 2=0.09%, 4=72.90%, 10=26.98% 00:27:36.928 cpu : usr=98.95%, sys=0.00%, ctx=3, majf=0, minf=609 00:27:36.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:36.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:36.928 issued rwts: total=32550,32627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:36.928 00:27:36.928 Run status group 0 (all jobs): 00:27:36.928 READ: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=127MiB (133MB), run=2001-2001msec 00:27:36.928 WRITE: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=127MiB (134MB), run=2001-2001msec 00:27:36.928 ----------------------------------------------------- 00:27:36.928 Suppressions used: 00:27:36.928 count bytes template 00:27:36.928 1 32 /usr/src/fio/parse.c 00:27:36.928 1 8 libtcmalloc_minimal.so 00:27:36.928 ----------------------------------------------------- 00:27:36.928 00:27:36.928 06:54:09 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:27:36.928 06:54:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:27:36.928 06:54:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:27:36.928 06:54:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:27:37.186 06:54:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:27:37.186 06:54:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:27:37.752 06:54:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:27:37.752 06:54:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:37.752 06:54:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:27:37.752 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:37.752 fio-3.35 00:27:37.752 Starting 1 thread 00:27:41.030 00:27:41.030 test: (groupid=0, jobs=1): err= 0: pid=65646: Fri Dec 6 06:54:13 2024 00:27:41.030 read: IOPS=16.0k, BW=62.5MiB/s (65.6MB/s)(125MiB/2001msec) 00:27:41.030 slat (nsec): min=4508, max=64030, avg=6154.61, stdev=1997.09 00:27:41.030 clat (usec): min=224, max=7817, avg=3982.52, stdev=573.60 00:27:41.030 lat (usec): min=229, max=7822, avg=3988.68, stdev=574.35 00:27:41.030 clat percentiles (usec): 00:27:41.030 | 1.00th=[ 2671], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3621], 00:27:41.030 | 30.00th=[ 3654], 40.00th=[ 
3720], 50.00th=[ 3785], 60.00th=[ 3949], 00:27:41.030 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 5014], 00:27:41.030 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7439], 99.95th=[ 7504], 00:27:41.030 | 99.99th=[ 7570] 00:27:41.030 bw ( KiB/s): min=61640, max=66936, per=99.36%, avg=63637.33, stdev=2877.85, samples=3 00:27:41.030 iops : min=15410, max=16734, avg=15909.33, stdev=719.46, samples=3 00:27:41.030 write: IOPS=16.0k, BW=62.7MiB/s (65.7MB/s)(125MiB/2001msec); 0 zone resets 00:27:41.030 slat (nsec): min=4652, max=36687, avg=6236.40, stdev=1928.84 00:27:41.030 clat (usec): min=248, max=7593, avg=3974.19, stdev=567.46 00:27:41.030 lat (usec): min=254, max=7629, avg=3980.43, stdev=568.16 00:27:41.030 clat percentiles (usec): 00:27:41.030 | 1.00th=[ 2606], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3621], 00:27:41.030 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3916], 00:27:41.030 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4948], 00:27:41.030 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 7439], 99.95th=[ 7504], 00:27:41.030 | 99.99th=[ 7570] 00:27:41.030 bw ( KiB/s): min=61040, max=66208, per=98.68%, avg=63330.67, stdev=2633.47, samples=3 00:27:41.030 iops : min=15260, max=16552, avg=15832.67, stdev=658.37, samples=3 00:27:41.030 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:27:41.030 lat (msec) : 2=0.20%, 4=62.21%, 10=37.55% 00:27:41.030 cpu : usr=98.95%, sys=0.05%, ctx=5, majf=0, minf=608 00:27:41.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:41.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:41.030 issued rwts: total=32040,32103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:41.030 00:27:41.030 Run status group 0 (all jobs): 00:27:41.030 READ: bw=62.5MiB/s (65.6MB/s), 62.5MiB/s-62.5MiB/s (65.6MB/s-65.6MB/s), io=125MiB (131MB), run=2001-2001msec 00:27:41.030 WRITE: bw=62.7MiB/s (65.7MB/s), 62.7MiB/s-62.7MiB/s (65.7MB/s-65.7MB/s), io=125MiB (131MB), run=2001-2001msec 00:27:41.030 ----------------------------------------------------- 00:27:41.030 Suppressions used: 00:27:41.030 count bytes template 00:27:41.030 1 32 /usr/src/fio/parse.c 00:27:41.030 1 8 libtcmalloc_minimal.so 00:27:41.030 ----------------------------------------------------- 00:27:41.030 00:27:41.288 06:54:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:27:41.288 06:54:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:27:41.288 06:54:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:27:41.288 06:54:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:27:41.547 06:54:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:27:41.547 06:54:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:27:41.806 06:54:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:27:41.806 06:54:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:41.806 06:54:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:27:41.806 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:41.806 fio-3.35 00:27:41.806 Starting 1 thread 00:27:45.985 00:27:45.985 test: (groupid=0, jobs=1): err= 0: pid=65711: Fri Dec 6 06:54:18 2024 00:27:45.985 read: IOPS=16.1k, BW=62.9MiB/s (66.0MB/s)(126MiB/2001msec) 00:27:45.985 slat (nsec): min=4537, max=69144, avg=6056.55, stdev=2036.97 00:27:45.985 clat (usec): min=227, max=7989, avg=3955.22, stdev=708.56 00:27:45.985 lat (usec): min=232, max=7995, avg=3961.28, stdev=709.47 00:27:45.985 clat percentiles (usec): 00:27:45.985 | 1.00th=[ 2999], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3556], 00:27:45.985 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:27:45.985 | 70.00th=[ 3884], 80.00th=[ 4146], 90.00th=[ 4686], 95.00th=[ 5866], 00:27:45.985 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7635], 99.95th=[ 7701], 00:27:45.985 | 99.99th=[ 7767] 00:27:45.985 bw ( KiB/s): min=57648, max=67760, per=98.06%, avg=63162.67, stdev=5118.03, samples=3 00:27:45.985 iops : min=14412, max=16940, avg=15790.67, stdev=1279.51, samples=3 00:27:45.985 write: IOPS=16.1k, BW=63.0MiB/s (66.1MB/s)(126MiB/2001msec); 0 zone resets 00:27:45.985 slat (nsec): min=4664, max=92530, avg=6140.44, stdev=2007.36 00:27:45.985 clat (usec): min=253, max=7902, avg=3955.92, stdev=700.46 00:27:45.985 lat (usec): min=259, max=7908, avg=3962.06, stdev=701.34 00:27:45.985 clat percentiles (usec): 00:27:45.985 | 1.00th=[ 3032], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3556], 00:27:45.985 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:27:45.985 | 70.00th=[ 3884], 80.00th=[ 4146], 90.00th=[ 4686], 95.00th=[ 5866], 00:27:45.985 
| 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7635], 00:27:45.985 | 99.99th=[ 7767] 00:27:45.985 bw ( KiB/s): min=57936, max=66944, per=97.40%, avg=62864.00, stdev=4563.48, samples=3 00:27:45.985 iops : min=14484, max=16736, avg=15716.00, stdev=1140.87, samples=3 00:27:45.985 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:27:45.985 lat (msec) : 2=0.09%, 4=76.39%, 10=23.49% 00:27:45.985 cpu : usr=98.75%, sys=0.20%, ctx=9, majf=0, minf=607 00:27:45.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:45.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:45.985 issued rwts: total=32223,32288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:45.985 00:27:45.985 Run status group 0 (all jobs): 00:27:45.985 READ: bw=62.9MiB/s (66.0MB/s), 62.9MiB/s-62.9MiB/s (66.0MB/s-66.0MB/s), io=126MiB (132MB), run=2001-2001msec 00:27:45.985 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=126MiB (132MB), run=2001-2001msec 00:27:46.244 ----------------------------------------------------- 00:27:46.244 Suppressions used: 00:27:46.244 count bytes template 00:27:46.244 1 32 /usr/src/fio/parse.c 00:27:46.244 1 8 libtcmalloc_minimal.so 00:27:46.244 ----------------------------------------------------- 00:27:46.244 00:27:46.244 ************************************ 00:27:46.244 END TEST nvme_fio 00:27:46.244 ************************************ 00:27:46.244 06:54:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:27:46.244 06:54:18 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:27:46.244 00:27:46.244 real 0m17.542s 00:27:46.244 user 0m13.890s 00:27:46.244 sys 0m2.852s 00:27:46.244 06:54:18 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.244 06:54:18 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:27:46.244 00:27:46.244 real 1m31.352s 00:27:46.244 user 3m46.088s 00:27:46.244 sys 0m14.892s 00:27:46.244 06:54:18 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.244 ************************************ 00:27:46.244 END TEST nvme 00:27:46.244 06:54:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:46.244 ************************************ 00:27:46.244 06:54:18 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:27:46.244 06:54:18 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:27:46.244 06:54:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:46.244 06:54:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:46.244 06:54:18 -- common/autotest_common.sh@10 -- # set +x 00:27:46.245 ************************************ 00:27:46.245 START TEST nvme_scc 00:27:46.245 ************************************ 00:27:46.245 06:54:18 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:27:46.504 * Looking for test storage... 
00:27:46.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:46.504 06:54:18 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:46.504 06:54:18 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:46.504 06:54:18 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:46.504 06:54:18 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@345 -- # : 1 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.504 06:54:18 nvme_scc -- scripts/common.sh@368 -- # return 0 00:27:46.504 06:54:18 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.504 06:54:18 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:46.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.504 --rc genhtml_branch_coverage=1 00:27:46.504 --rc genhtml_function_coverage=1 00:27:46.504 --rc genhtml_legend=1 00:27:46.504 --rc geninfo_all_blocks=1 00:27:46.504 --rc geninfo_unexecuted_blocks=1 00:27:46.504 00:27:46.504 ' 00:27:46.504 06:54:18 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:46.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.504 --rc genhtml_branch_coverage=1 00:27:46.504 --rc genhtml_function_coverage=1 00:27:46.504 --rc genhtml_legend=1 00:27:46.504 --rc geninfo_all_blocks=1 00:27:46.505 --rc geninfo_unexecuted_blocks=1 00:27:46.505 00:27:46.505 ' 00:27:46.505 06:54:18 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:46.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.505 --rc genhtml_branch_coverage=1 00:27:46.505 --rc genhtml_function_coverage=1 00:27:46.505 --rc genhtml_legend=1 00:27:46.505 --rc geninfo_all_blocks=1 00:27:46.505 --rc geninfo_unexecuted_blocks=1 00:27:46.505 00:27:46.505 ' 00:27:46.505 06:54:18 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:46.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.505 --rc genhtml_branch_coverage=1 00:27:46.505 --rc genhtml_function_coverage=1 00:27:46.505 --rc genhtml_legend=1 00:27:46.505 --rc geninfo_all_blocks=1 00:27:46.505 --rc geninfo_unexecuted_blocks=1 00:27:46.505 00:27:46.505 ' 00:27:46.505 06:54:18 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:46.505 06:54:18 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:46.505 06:54:18 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:46.505 06:54:19 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:46.505 06:54:19 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.505 06:54:19 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.505 06:54:19 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.505 06:54:19 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.505 06:54:19 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.505 06:54:19 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.505 06:54:19 nvme_scc -- paths/export.sh@5 -- # export PATH 00:27:46.505 06:54:19 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
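An aside on the cmp_versions walk traced above: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field, which is how the `lt 1.15 2` check resolves to true before the LCOV options get exported. Below is a minimal standalone re-sketch of that comparison; the helper name version_lt and the self-test at the bottom are ours for illustration, not SPDK's.

    #!/usr/bin/env bash
    # Sketch of the field-by-field version comparison traced in scripts/common.sh.
    version_lt() {
        local IFS=.-:                 # split fields the same way the trace shows (IFS=.-:)
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0, so "1.15" vs "2" works
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                               # equal is not "less than"
    }

    version_lt 1.15 2 && echo "1.15 < 2"       # matches the lt 1.15 2 result in the trace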
00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:27:46.505 06:54:19 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:27:46.505 06:54:19 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:46.505 06:54:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:27:46.505 06:54:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:27:46.505 06:54:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:27:46.505 06:54:19 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:46.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:47.022 Waiting for block devices as requested 00:27:47.022 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.022 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.281 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.281 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:52.598 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:52.598 06:54:24 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:27:52.598 06:54:24 nvme_scc -- scripts/common.sh@18 -- # local i 00:27:52.598 06:54:24 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:27:52.598 06:54:24 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:52.598 06:54:24 nvme_scc -- scripts/common.sh@27 -- # return 0 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
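For readers skimming the wall of xtrace that follows: everything from here down is nvme_get (test/common/nvme/functions.sh) walking the output of `nvme id-ctrl /dev/nvme0` line by line with `IFS=: read -r reg val` and eval-ing each field into a global associative array, exactly as the nvme0[vid]="0x1b36" assignment just showed. A condensed sketch of the same pattern, assuming nvme-cli is installed and /dev/nvme0 exists; the array name ctrl_info is ours (the real code eval's into nvme0, and keeps the value padding we trim here).

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl` key/value output into an associative array.
    declare -A ctrl_info

    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue    # skip lines without a value, as the trace does
        reg=${reg//[[:space:]]/}                # keys like 'vid   ' -> 'vid'
        read -r val <<< "$val"                  # trim the padding around the value
        ctrl_info[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "vid=${ctrl_info[vid]} sn=${ctrl_info[sn]} mdts=${ctrl_info[mdts]}"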
00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.598 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
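The hex fields captured above are bitmasks straight out of the Identify Controller structure; oacs=0x12a, for instance, encodes which optional admin commands this QEMU controller accepts. A quick decode, using the NVMe base spec bit assignments as we read them (bit 1 Format NVM, bit 3 Namespace Management, bit 5 Directives, bit 8 Doorbell Buffer Config):

    #!/usr/bin/env bash
    oacs=0x12a    # value from the nvme0[oacs] assignment above

    # 0x12a = bits 1, 3, 5 and 8 set, so all four lines print.
    (( oacs & 1 << 1 )) && echo "Format NVM supported"
    (( oacs & 1 << 3 )) && echo "Namespace Management supported"
    (( oacs & 1 << 5 )) && echo "Directives supported"
    (( oacs & 1 << 8 )) && echo "Doorbell Buffer Config supported"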
00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.599 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:27:52.600 06:54:24 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:52.600 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:52.601 06:54:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:27:52.601 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:27:52.602 
06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
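To put the ng0n1 numbers just captured in human terms: nsze=0x140000 is the namespace size in logical blocks, and flbas=0x4 selects LBA format 4, which the lbaf listing a little further down reports as lbads:12, i.e. 2^12 = 4096-byte blocks. A short worked example of the capacity arithmetic, with the values taken from the assignments above:

    #!/usr/bin/env bash
    nsze=0x140000   # namespace size in logical blocks (from ng0n1[nsze])
    lbads=12        # log2(block size) for the in-use LBA format 4

    bytes=$(( nsze * (1 << lbads) ))
    echo "$bytes bytes"             # 5368709120
    echo "$(( bytes >> 30 )) GiB"   # 5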
00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:27:52.602 06:54:24 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:52.602 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:27:52.603 06:54:24 nvme_scc 
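The wall of xtrace above is the harness's nvme_get helper (nvme/functions.sh@16-23) walking `nvme id-ns` output: each `field : value` line is split on the first colon and eval'd into a global associative array named after the device, so fields like nsze and ncap become nvme0n1[nsze], nvme0n1[ncap], and so on. A minimal sketch of that loop, reconstructed from the trace steps above (the key/value trimming is an assumption; this is not the verbatim upstream source):

    nvme_get() {
        local ref=$1 reg val                    # e.g. ref=nvme0n1
        shift
        local -gA "$ref=()"                     # global assoc array, keyed by field name
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue           # skip header lines with no ':'
            reg=${reg//[[:space:]]/}            # assumption: keys are whitespace-trimmed
            eval "${ref}[$reg]=\"${val# }\""    # e.g. nvme0n1[nsze]="0x140000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    nvme_get nvme0n1 id-ns /dev/nvme0n1         # as invoked at functions.sh@57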
-- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 
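A quick decode of the nvme0n1 values just captured: the low nibble of flbas (0x4) indexes the active LBA format, and each lbafN entry's lbads field is log2 of the LBA data size, so this namespace runs on lbaf4 (`ms:0 lbads:12 rp:0 (in use)` further down the trace), i.e. 4096-byte blocks with no per-block metadata. With nsze = 0x140000 that is exactly 5 GiB; a back-of-the-envelope check:

    # Values taken from the trace above; bit layout per the NVMe base spec.
    flbas=0x4 lbads=12 nsze=0x140000
    fmt=$(( flbas & 0xf ))      # low nibble selects the format -> lbaf4
    bs=$(( 1 << lbads ))        # lbads = log2(block size) -> 4096
    echo "lbaf$fmt, ${bs}B blocks, $(( nsze * bs / 1024**3 )) GiB"
    # -> lbaf4, 4096B blocks, 5 GiB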
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:27:52.603 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:27:52.604 06:54:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:27:52.604 06:54:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.604 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:27:52.605 06:54:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:27:52.605 06:54:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:27:52.605 06:54:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:52.605 06:54:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:27:52.605 06:54:25 
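With nvme0 fully parsed, the trace registers it (ctrls/nvmes/bdfs/ordered_ctrls at functions.sh@60-63) and advances the outer discovery loop to nvme1, re-running the pci_can_use allow/deny check from scripts/common.sh. Putting the pieces visible at functions.sh@47-63 together, the scan looks roughly like the sketch below (the function name and the sysfs derivation of the PCI address are assumptions; nvme_get is sketched earlier):

    scan_nvme_ctrls() {   # name assumed -- the trace only shows file@line markers
        local ctrl ns pci ctrl_dev
        declare -gA ctrls nvmes bdfs
        declare -ga ordered_ctrls
        shopt -s extglob nullglob
        for ctrl in /sys/class/nvme/nvme+([0-9]); do
            pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption: BDF via sysfs
            pci_can_use "$pci" || continue       # allow/deny lists, scripts/common.sh@18-27
            ctrl_dev=${ctrl##*/}                 # e.g. nvme1
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -n _ctrl_ns=${ctrl_dev}_ns
            # the extglob matches both generic char nodes (ng1n1) and block nodes (nvme1n1)
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                [[ -e /sys/class/nvme/$ctrl_dev/${ns##*/} ]] || continue
                nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
                _ctrl_ns[${ns##*n}]=${ns##*/}    # indexed by namespace number
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci               # e.g. 0000:00:10.0
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }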
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 
06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:27:52.605 
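The id-ctrl dump that starts here identifies the device as QEMU's emulated NVMe controller: vid 0x1b36 (Red Hat/QEMU's PCI vendor ID), ssvid 0x1af4, serial "12340", model "QEMU NVMe Ctrl", and ver 0x10400. The version field packs major/minor/tertiary into 32 bits, so 0x10400 decodes to NVMe 1.4.0:

    # VS layout per the NVMe base spec: MJR[31:16] MNR[15:8] TER[7:0]
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0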
06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:27:52.605 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
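wctemp=343 and cctemp=373 look alarming until you remember the spec reports thermal thresholds in kelvin: these are the usual ~70 C warning and ~100 C critical composite-temperature limits.

    echo "$(( 343 - 273 ))C warning / $(( 373 - 273 ))C critical"   # ~70C / ~100C (K - 273)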
'nvme1[mtfa]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.606 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.607 06:54:25 nvme_scc -- 
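Two values in this stretch matter most to this suite. sqes=0x66 and cqes=0x44 encode required (low nibble) and maximum (high nibble) queue-entry sizes as powers of two: 64-byte SQEs and 16-byte CQEs. And oncs=0x15d is the Optional NVM Command Support bitmap; decoding it with the base-spec bit assignments shows why a test named nvme_scc cares, since bit 8 advertises the Copy command used by Simple Copy:

    # ONCS bit names per the NVMe base spec -- an assumption worth double-checking
    # against the spec revision in use.
    oncs=0x15d
    names=(compare write_unc dsm write_zeroes save_select reservations timestamp verify copy)
    for i in "${!names[@]}"; do
        (( oncs & (1 << i) )) && echo "ONCS bit $i: ${names[i]}"
    done
    # -> bits 0,2,3,4,6,8: compare, dsm, write_zeroes, save_select, timestamp, copy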
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:52.607 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:52.608 06:54:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
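The second namespace is a different shape from nvme0's: ng1n1 reports nsze = ncap = nuse = 0x17a17a with flbas=0x7, i.e. LBA format 7 is active. Assuming the same lbafN table the other namespaces in this trace show (lbaf7 = `ms:64 lbads:12`), that is 4096-byte data blocks with 64 bytes of metadata each, roughly 6 GB of capacity:

    nsze=0x17a17a lbads=12      # values from the trace above
    echo "$(( nsze )) blocks of $(( 1 << lbads ))B -> ~$(( nsze * (1 << lbads) / 1000**3 )) GB"
    # -> 1548666 blocks of 4096B -> ~6 GB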
00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.608 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:27:52.609 06:54:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.609 
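Each lbafN value captured above describes one LBA format: ms is the metadata bytes per block, lbads the log2 of the data block size, and rp a relative-performance hint. With flbas=0x7 this namespace uses lbaf7, i.e. 2^12 = 4096-byte data blocks carrying 64 bytes of metadata. A throwaway decoder for that exact string layout (decode_lbaf is hypothetical, not part of functions.sh):

#!/usr/bin/env bash
# Decode an "ms:X lbads:Y rp:Z" string as printed by nvme id-ns.
decode_lbaf() {
    local ms lbads rp
    read -r ms lbads rp _ <<< "$1"
    printf 'data=%d B, metadata=%s B, relative perf=%s\n' \
        "$((1 << ${lbads#lbads:}))" "${ms#ms:}" "${rp#rp:}"
}

decode_lbaf 'ms:64 lbads:12 rp:0 (in use)'   # -> data=4096 B, metadata=64 B, relative perf=0
decode_lbaf 'ms:0 lbads:9 rp:0 '             # -> data=512 B, metadata=0 B, relative perf=0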
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:27:52.609 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:27:52.610 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
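The @54 loop above depends on bash extglob: with ctrl=/sys/class/nvme/nvme1, ${ctrl##*nvme} is "1" and ${ctrl##*/} is "nvme1", so the pattern expands to @(ng1|nvme1n)* and matches both the generic node ng1n1 and the block node nvme1n1. Both land in the same _ctrl_ns slot, keyed by the digits after the last "n", so the second assignment (nvme1n1) is the one that sticks. A self-contained sketch under those assumptions:

#!/usr/bin/env bash
# Sketch of the namespace enumeration traced at functions.sh@54-58.
shopt -s extglob nullglob

ctrl=/sys/class/nvme/nvme1
declare -A _ctrl_ns

for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue
    ns=${ns##*/}                # basename: ng1n1, then nvme1n1
    _ctrl_ns[${ns##*n}]=$ns     # key "1" = namespace id; last writer wins
done

declare -p _ctrl_ns             # e.g. declare -A _ctrl_ns=([1]="nvme1n1")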
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:27:52.874 06:54:25 nvme_scc -- scripts/common.sh@18 -- # local i
00:27:52.874 06:54:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:27:52.874 06:54:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:27:52.874 06:54:25 nvme_scc -- scripts/common.sh@27 -- # return 0
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
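pci_can_use in scripts/common.sh gates every discovered controller on optional allow/block lists; in this run both lists are empty (the [[ =~ 0000:00:12.0 ]] test matches against an empty allow pattern and [[ -z '' ]] sees an empty block list), so it returns 0 and nvme2 is kept. A rough sketch of that gate, assuming PCI_ALLOWED and PCI_BLOCKED are space-separated BDF lists; the real matching details differ:

#!/usr/bin/env bash
# Simplified allow/block gate in the spirit of scripts/common.sh.
pci_can_use() {
    local i
    # With a non-empty allow-list, the BDF must appear on it.
    if [[ -n ${PCI_ALLOWED:-} ]]; then
        [[ " $PCI_ALLOWED " == *" $1 "* ]] || return 1
    fi
    # Any block-list hit rejects the device.
    for i in ${PCI_BLOCKED:-}; do
        [[ $i == "$1" ]] && return 1
    done
    return 0
}

pci_can_use 0000:00:12.0 && echo "0000:00:12.0 is usable"   # both lists empty here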
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:27:52.874 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:27:52.875 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:27:52.876 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 --
# IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.877 
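
With the controller registers captured, functions.sh switches to namespace discovery: `local -n _ctrl_ns=nvme2_ns` creates a nameref so assignments land in the per-controller array nvme2_ns, and the `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` glob that begins here is an extglob which, for ctrl=/sys/class/nvme/nvme2, matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) in that sysfs directory. A reduced sketch of the same enumeration (assumes this sysfs layout; not the verbatim script):

    #!/usr/bin/env bash
    shopt -s extglob nullglob                 # extglob for @(...), nullglob for empty dirs

    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns              # nameref: assignments go to nvme2_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                      # e.g. ng2n1 or nvme2n1
        _ctrl_ns[${ns_dev##*n}]=$ns_dev       # key by namespace id: _ctrl_ns[1]=ng2n1
    done
    declare -p nvme2_ns

As the trace below shows, later matches for the same namespace id (nvme2n1 after ng2n1) simply overwrite the earlier entry under the same key.
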
06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
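
For ng2n1 the id-ns fields land the same way: nsze, ncap, and nuse are all 0x100000 blocks, and flbas=0x4 selects LBA format index 4, whose descriptor (lbaf4, listed a little further down) reads `ms:0 lbads:12 rp:0 (in use)`, i.e. 2^12 = 4096-byte data blocks with no metadata. That makes each of these QEMU namespaces 4 GiB; a quick shell-arithmetic check:

    nsze=0x100000; lbads=12
    echo $(( nsze * (1 << lbads) ))           # 4294967296 bytes
    echo $(( (nsze * (1 << lbads)) >> 30 ))   # 4 (GiB)
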
00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.877 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.878 06:54:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:27:52.878 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:27:52.879 06:54:25 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 
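
The lbaf0-lbaf7 descriptors recorded for each namespace (and repeated here for ng2n2) enumerate the eight supported LBA formats: ms is the metadata bytes per block, lbads the log2 of the data block size (9 = 512 B, 12 = 4096 B), and rp a relative-performance hint; nvme-cli tags the descriptor selected by flbas with "(in use)". Once the arrays are populated, picking out the active format is a short scan over the keys; a small self-contained sketch (array contents abbreviated from the values above):

    declare -A ng2n1=(
        [flbas]=0x4
        [lbaf0]='ms:0 lbads:9 rp:0 '
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )
    for key in "${!ng2n1[@]}"; do
        [[ $key == lbaf* && ${ng2n1[$key]} == *'(in use)'* ]] || continue
        echo "active format: $key -> ${ng2n1[$key]}"
    done
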
06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.879 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:52.880 06:54:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:27:52.880 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.881 06:54:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:52.881 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- 
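The trace at this point is the nvme_get helper from nvme/functions.sh filling the global associative array nvme2n1: it runs /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 (functions.sh line 16), splits every output line on the first ':' with IFS=: read -r reg val (line 21), and evals each pair into the array (line 23). A minimal sketch of that pattern, assuming the whitespace trimming seen in the stored values; the function name is hypothetical and the real functions.sh may differ in detail:

  nvme_get_sketch() {
      local ref=$1 reg val parts
      shift                                  # remaining args: the command to run
      local -gA "$ref=()"                    # e.g. declare -gA nvme2n1=()
      while IFS=: read -r reg val; do        # split each line on the first ':' only
          reg=${reg//[[:space:]]/}           # 'lbaf  4 ' -> 'lbaf4'
          read -ra parts <<< "$val"          # trim and squeeze the value side
          val=${parts[*]}
          [[ -n $reg && -n $val ]] || continue
          eval "${ref}[\$reg]=\$val"         # -> nvme2n1[nsze]=0x100000
      done < <("$@")
  }

Called as nvme_get_sketch nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1, after which ${nvme2n1[nsze]} expands to 0x100000, matching the assignments that follow in the trace.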
nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.882 06:54:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:27:52.882 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:27:52.883 06:54:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.883 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
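Each lbafN value captured above packs three fields: ms (metadata bytes per block), lbads (log2 of the data block size), and rp (relative performance). flbas=0x4 means LBA format 4 is the active one, which is why lbaf4 alone carries the '(in use)' tag. A hedged helper showing how those strings can be decoded from the array nvme_get just built; the function name is hypothetical, not part of functions.sh:

  get_block_size() {
      local -n ns=$1                        # nameref onto e.g. nvme2n1
      local fmt=$(( ${ns[flbas]} & 0xf ))   # FLBAS bits 3:0 select the format
      local lbads=${ns[lbaf$fmt]}           # 'ms:0 lbads:12 rp:0 (in use)'
      lbads=${lbads##*lbads:}               # -> '12 rp:0 (in use)'
      lbads=${lbads%% *}                    # -> '12'
      echo $(( 1 << lbads ))                # 2^12 = 4096-byte logical blocks
  }

So get_block_size nvme2n1 prints 4096 for these namespaces, while formats 0 through 3 (lbads:9) would give 512-byte blocks.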
]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:27:52.884 06:54:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:52.884 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:27:53.145 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:27:53.146 
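The recurring loop header for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* (functions.sh line 54) is an extglob pattern walking /sys/class/nvme/nvme2: with ctrl=/sys/class/nvme/nvme2 it expands to @(ng2|nvme2n)*, matching both the generic char-device nodes (ng2n1..ng2n3) and the block-device nodes (nvme2n1..nvme2n3). A sketch of that discovery step, assuming extglob/nullglob are enabled as the @() syntax implies:

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  declare -A _ctrl_ns=()
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue          # same existence check as line 55 above
      ns_dev=${ns##*/}                  # e.g. nvme2n2
      _ctrl_ns[${ns##*n}]=$ns_dev       # key '2': the digits after the last 'n'
  done

Because the glob sorts ng2n* before nvme2n*, each index is first set to the ngXnY name and then overwritten by the nvmeXnY block device, exactly the ng2n3-then-nvme2n3 sequence visible in this trace.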
06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.146 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:27:53.147 06:54:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.147 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:53.148 06:54:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:27:53.148 06:54:25 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:27:53.148 06:54:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:27:53.148 06:54:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:27:53.148 06:54:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:53.148 06:54:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 
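Once a controller and all of its namespaces are parsed, the trace records it in four maps (functions.sh lines 60-63) and moves to the next /sys/class/nvme entry, where pci_can_use gates each device by PCI address; the bare [[ =~ 0000:00:13.0 ]] above is just xtrace rendering an empty list variable on the left of =~. A hedged approximation of both steps, where the PCI_BLOCKED/PCI_ALLOWED names are assumptions about those empty operands:

  pci_can_use() {                              # $1 = PCI bdf, e.g. 0000:00:13.0
      [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1 # explicitly blocked
      [[ -z ${PCI_ALLOWED:-} ]] && return 0    # no allow list: accept everything
      [[ $PCI_ALLOWED =~ $1 ]]                 # otherwise must be listed
  }

  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()
  ctrl_dev=nvme2 pci=0000:00:12.0
  ctrls["$ctrl_dev"]=nvme2                # id-ctrl fields live in array nvme2
  nvmes["$ctrl_dev"]=nvme2_ns             # name of this controller's ns map
  bdfs["$ctrl_dev"]=$pci                  # controller -> PCI address
  ordered_ctrls[${ctrl_dev/nvme/}]=nvme2  # numeric index keeps probe order

With both lists empty, as here, every controller passes and nvme3 at 0000:00:13.0 is parsed next.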
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:27:53.148 06:54:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.148 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:27:53.149 06:54:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 
06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:27:53.149 06:54:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:27:53.149 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 
06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:27:53.150 
06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:27:53.150 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.151 06:54:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:27:53.151 06:54:25 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:27:53.151 06:54:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:27:53.151 06:54:25 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:27:53.151 06:54:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:27:53.151 06:54:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:27:53.151 06:54:25 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:53.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:53.975 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.975 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:54.234 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:54.234 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:54.234 06:54:26 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:27:54.234 06:54:26 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:54.234 06:54:26 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.234 06:54:26 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:27:54.234 ************************************ 00:27:54.234 START TEST nvme_simple_copy 00:27:54.234 ************************************ 00:27:54.234 06:54:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:27:54.492 Initializing NVMe Controllers 00:27:54.492 Attaching to 0000:00:10.0 00:27:54.492 Controller supports SCC. Attached to 0000:00:10.0 00:27:54.492 Namespace ID: 1 size: 6GB 00:27:54.492 Initialization complete. 
00:27:54.492 00:27:54.492 Controller QEMU NVMe Ctrl (12340 ) 00:27:54.492 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:27:54.492 Namespace Block Size:4096 00:27:54.492 Writing LBAs 0 to 63 with Random Data 00:27:54.492 Copied LBAs from 0 - 63 to the Destination LBA 256 00:27:54.492 LBAs matching Written Data: 64 00:27:54.492 00:27:54.492 real 0m0.301s 00:27:54.492 user 0m0.116s 00:27:54.492 sys 0m0.083s 00:27:54.492 06:54:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.492 ************************************ 00:27:54.492 06:54:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:27:54.492 END TEST nvme_simple_copy 00:27:54.492 ************************************ 00:27:54.492 00:27:54.492 real 0m8.232s 00:27:54.492 user 0m1.509s 00:27:54.492 sys 0m1.631s 00:27:54.492 06:54:27 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.492 06:54:27 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:27:54.492 ************************************ 00:27:54.492 END TEST nvme_scc 00:27:54.492 ************************************ 00:27:54.492 06:54:27 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:27:54.492 06:54:27 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:27:54.492 06:54:27 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:27:54.751 06:54:27 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:27:54.751 06:54:27 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:27:54.751 06:54:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:54.751 06:54:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.751 06:54:27 -- common/autotest_common.sh@10 -- # set +x 00:27:54.751 ************************************ 00:27:54.751 START TEST nvme_fdp 00:27:54.751 ************************************ 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:27:54.751 * Looking for test storage... 00:27:54.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:54.751 06:54:27 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:54.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.751 --rc genhtml_branch_coverage=1 00:27:54.751 --rc genhtml_function_coverage=1 00:27:54.751 --rc genhtml_legend=1 00:27:54.751 --rc geninfo_all_blocks=1 00:27:54.751 --rc geninfo_unexecuted_blocks=1 00:27:54.751 00:27:54.751 ' 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:54.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.751 --rc genhtml_branch_coverage=1 00:27:54.751 --rc genhtml_function_coverage=1 00:27:54.751 --rc genhtml_legend=1 00:27:54.751 --rc geninfo_all_blocks=1 00:27:54.751 --rc geninfo_unexecuted_blocks=1 00:27:54.751 00:27:54.751 ' 00:27:54.751 06:54:27 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:54.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.751 --rc genhtml_branch_coverage=1 00:27:54.751 --rc genhtml_function_coverage=1 00:27:54.751 --rc genhtml_legend=1 00:27:54.752 --rc geninfo_all_blocks=1 00:27:54.752 --rc geninfo_unexecuted_blocks=1 00:27:54.752 00:27:54.752 ' 00:27:54.752 06:54:27 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:54.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.752 --rc genhtml_branch_coverage=1 00:27:54.752 --rc genhtml_function_coverage=1 00:27:54.752 --rc genhtml_legend=1 00:27:54.752 --rc geninfo_all_blocks=1 00:27:54.752 --rc geninfo_unexecuted_blocks=1 00:27:54.752 00:27:54.752 ' 00:27:54.752 06:54:27 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:54.752 06:54:27 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.752 06:54:27 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.752 06:54:27 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.752 06:54:27 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.752 06:54:27 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.752 06:54:27 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.752 06:54:27 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.752 06:54:27 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:27:54.752 06:54:27 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:27:54.752 06:54:27 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:27:54.752 06:54:27 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:54.752 06:54:27 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:55.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:55.318 Waiting for block devices as requested 00:27:55.318 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:55.576 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:55.576 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:55.577 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:00.846 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:00.846 06:54:33 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:28:00.846 06:54:33 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:28:00.846 06:54:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:28:00.846 06:54:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:00.846 06:54:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:00.846 06:54:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:28:00.846 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:28:00.846 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:28:00.847 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
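The register values above are raw bitfields from the NVMe Identify Controller structure, and a few are worth decoding: sqes=0x66 packs the required and maximum submission-queue entry sizes as powers of two (both 2^6 = 64 bytes here), cqes=0x44 does the same for completion entries (16 bytes), and oacs=0x12a advertises optional admin commands. A minimal bash sketch of checking such bits against the array the trace is building (the helper names and bit positions are illustrative, taken from the NVMe base specification, not from nvme/functions.sh):

    #!/usr/bin/env bash
    # Hypothetical helpers, not part of nvme/functions.sh: decode two of the
    # id-ctrl bitfields captured in the trace above.
    declare -A nvme0=( [oacs]=0x12a [sqes]=0x66 )

    oacs_has_ns_mgmt() {
      # Bit 3 of OACS advertises Namespace Management support.
      (( nvme0[oacs] & (1 << 3) ))
    }

    sq_entry_size() {
      # The low nibble of SQES is the required entry size, log2-encoded.
      echo $(( 1 << (nvme0[sqes] & 0xf) ))
    }

    oacs_has_ns_mgmt && echo "namespace management supported"
    sq_entry_size    # prints 64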
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:28:00.848 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
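Every assignment in this trace comes from the same nvme_get helper: it runs nvme-cli's id-ctrl/id-ns against the device, splits each output line on ':' with read -r reg val, and evals the pair into a global associative array named after the device, which is exactly the @17-@23 sequence logged above. A condensed, self-contained sketch of that pattern (simplified: the real nvme/functions.sh also guards empty values with the [[ -n ... ]] checks visible in the trace and handles multi-word values):

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing loop (assumes nvme-cli is installed).
    nvme_get_sketch() {
      local ref=$1 subcmd=$2 dev=$3 reg val
      declare -gA "$ref=()"                  # e.g. creates the global array nvme0
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # nvme-cli pads field names with spaces
        [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\${val# }"
      done < <(nvme "$subcmd" "$dev")
    }

    # Usage: nvme_get_sketch nvme0 id-ctrl /dev/nvme0; echo "${nvme0[mdts]}"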
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:28:00.849 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
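The loop header at @54 uses an extglob alternation to pick up both namespace node flavors the kernel exposes under a controller's sysfs directory: the ng0n1 character device (just parsed above) and the nvme0n1 block device (parsed next). A small sketch of how that glob resolves (illustrative; extglob must be enabled for the @(...) pattern):

    #!/usr/bin/env bash
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    # "ng${ctrl##*nvme}" -> ng0 and "${ctrl##*/}n" -> nvme0n, so the pattern
    # becomes /sys/class/nvme/nvme0/@(ng0|nvme0n)* and matches ng0n1 and nvme0n1.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue
      # ${ns##*n} strips through the last 'n', leaving the namespace id,
      # which is why both device names index the same _ctrl_ns slot below.
      echo "nsid=${ns##*n} dev=${ns##*/}"
    done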
00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:00.850 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:28:00.851 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val
00:28:00.851 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme_get nvme0n1 (id-ns /dev/nvme0n1), remaining fields:
    nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:28:00.852 06:54:33 nvme_fdp -- nvme/functions.sh@58-63 -- # bookkeeping for the first controller:
    _ctrl_ns[1]=nvme0n1 ctrls[nvme0]=nvme0 nvmes[nvme0]=nvme0_ns bdfs[nvme0]=0000:00:11.0 ordered_ctrls[0]=nvme0
00:28:00.852 06:54:33 nvme_fdp -- nvme/functions.sh@47-51 -- # next controller: /sys/class/nvme/nvme1 exists;
    pci=0000:00:10.0, pci_can_use 0000:00:10.0 returns 0, ctrl_dev=nvme1
00:28:00.852 06:54:33 nvme_fdp -- nvme/functions.sh@16-23 -- # nvme_get nvme1 (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1), parsed into nvme1[]:
    vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0
    mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
    mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
    anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
    oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
    mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0
    fcatt=0 msdbd=0 ofcs=0
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-'
    active_power_workload=-
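A few of those raw id-ctrl numbers decode into friendlier terms: ver packs the spec version into bytes (0x10400 = NVMe 1.4.0), mdts limits transfers to 2^mdts minimum-size pages, wctemp/cctemp are kelvins, and oacs/oncs are capability bitmasks (reading oacs=0x12a against the NVMe base spec gives Format NVM, Namespace Management, Directives and Doorbell Buffer Config; bit 8 of oncs=0x15d indicates Copy command support). A quick decode, with the 4 KiB page size an assumption in place of reading CAP.MPSMIN from the controller:

    # Illustrative decode of id-ctrl values captured above; the 4 KiB minimum
    # page size is assumed rather than read from the CAP register.
    ver=0x10400 mdts=7 wctemp=343 cctemp=373 oacs=0x12a
    printf 'NVMe version %d.%d.%d\n' $((ver >> 16)) $((ver >> 8 & 0xff)) $((ver & 0xff))
    printf 'max transfer %d KiB\n' $(( (1 << mdts) * 4 ))
    printf 'warn/crit temps %d C / %d C\n' $((wctemp - 273)) $((cctemp - 273))
    (( oacs & 1 << 5 )) && echo 'Directives supported'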
00:28:00.855 06:54:33 nvme_fdp -- nvme/functions.sh@53-57 -- # namespaces of nvme1 (_ctrl_ns=nvme1_ns): /sys/class/nvme/nvme1/ng1n1 exists, ns_dev=ng1n1
00:28:00.855 06:54:33 nvme_fdp -- nvme/functions.sh@16-23 -- # nvme_get ng1n1 (id-ns /dev/ng1n1), parsed into ng1n1[]:
    nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
    nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
    nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
    lbaf4='ms:0 lbads:12 rp:0 ' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)'
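Tying the ng1n1 numbers together: nsze, ncap and nuse are counted in logical blocks, and the low nibble of flbas (0x7) points at lbaf7, whose lbads of 12 means 4096-byte blocks, matching the '(in use)' marker above. The arithmetic is nothing device-specific:

    # Namespace size for ng1n1 from the id-ns values above.
    nsze=0x17a17a lbads=12
    echo "$((nsze * (1 << lbads))) bytes"        # 6343335936
    echo "$((nsze * (1 << lbads) >> 20)) MiB"    # 6049 MiB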
00:28:00.857 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:28:00.857 06:54:33 nvme_fdp -- nvme/functions.sh@54-56 -- # second match for the same namespace: /sys/class/nvme/nvme1/nvme1n1 exists, ns_dev=nvme1n1
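The double dump of what is physically one namespace comes from the glob visible in the trace: with bash's extglob enabled, @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the generic character-device node and the block-device node under the controller's sysfs directory, and each match gets its own nvme_get pass. Reproduced standalone:

    # The namespace glob from functions.sh, run on its own (requires extglob);
    # for ctrl=/sys/class/nvme/nvme1 it matches ng1n1 and nvme1n1.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"
    done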
00:28:00.857 06:54:33 nvme_fdp -- nvme/functions.sh@16-23 -- # nvme_get nvme1n1 (id-ns /dev/nvme1n1), parsed into nvme1n1[]:
    nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
    nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
00:28:01.119 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
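The stretch of trace above and below repeats a single pattern: nvme_get runs an nvme-cli subcommand, splits each "field : value" output line with IFS=: and read -r, then evals the pair into a global associative array named after the device (nvme1n1[nsze]=0x17a17a, nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ', and so on). A minimal, self-contained sketch of that pattern, reconstructed from the xtrace output rather than quoted from nvme/functions.sh:

    #!/usr/bin/env bash
    # Sketch only: mirrors the IFS=:/read/eval loop visible in the trace.
    # Requires nvme-cli; the device path in the usage note is illustrative.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme1n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # keys: nsze, ncap, lbaf0, ...
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"${val# }\"" # nvme1n1[nsze]="0x17a17a"
        done < <("$@")
    }
    # Usage (hypothetical device):
    #   nvme_get nvme1n1 nvme id-ns /dev/nvme1n1
    #   echo "${nvme1n1[nsze]}"              # -> 0x17a17a

In the lbafN values captured this way, lbads is log2 of the LBA data size (lbads:9 = 512-byte blocks, lbads:12 = 4096-byte), ms is the metadata bytes per block, and "(in use)" marks the format currently selected via flbas — consistent with flbas=0x7 pointing at lbaf7 for nvme1n1 above.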
00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:28:01.119 06:54:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:28:01.119 06:54:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:28:01.119 06:54:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:01.119 06:54:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
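At the end of the nvme1 block above, the trace shows the bookkeeping performed once a controller and its namespaces are fully parsed: ctrls["$ctrl_dev"] records the controller array, nvmes["$ctrl_dev"] the name of its per-controller namespace map (nvme1_ns), bdfs["$ctrl_dev"] its PCI address (0000:00:10.0), and ordered_ctrls gets the controller at the index derived from ${ctrl_dev/nvme/}. A hedged sketch of that registration step (register_ctrl is an illustrative helper, not a function in nvme/functions.sh):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    register_ctrl() {
        local ctrl_dev=$1 pci=$2
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # nvme1 -> "nvme1_ns"
        bdfs["$ctrl_dev"]=$pci                      # nvme1 -> 0000:00:10.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # "nvme1" -> index 1
    }
    register_ctrl nvme1 0000:00:10.0
    register_ctrl nvme2 0000:00:12.0
    for c in "${ordered_ctrls[@]}"; do echo "$c @ ${bdfs[$c]}"; done

Incidentally, the ver=0x10400 captured above decodes, per the NVMe VER field layout (major in bits 31:16, minor in 15:8, tertiary in 7:0), to NVMe 1.4.0.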
00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:28:01.119 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:28:01.119 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
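The wctemp=343 / cctemp=373 pair captured just above is reported in Kelvin, as the NVMe spec defines these thresholds; a one-liner makes the familiar Celsius values visible (the helper name is illustrative):

    # NVMe reports WCTEMP/CCTEMP in Kelvin; subtract 273 for Celsius.
    k_to_c() { echo $(( $1 - 273 )); }
    k_to_c 343   # -> 70  (warning composite temperature)
    k_to_c 373   # -> 100 (critical composite temperature)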
00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:28:01.120 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.120 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
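After the last scalar id-ctrl fields above (subnqn, ioccsz, msdbd, ...), the trace just below switches to this controller's namespaces via `local -n _ctrl_ns=nvme2_ns`: a bash nameref (bash 4.3+) that lets the shared per-namespace loop write into whichever controller-specific array is bound at the time, indexing each entry by the namespace number stripped from the device name. A small sketch of that trick, with array names taken from the trace but the helper itself illustrative:

    declare -A nvme2_ns=()
    collect_ns() {
        local -n _ctrl_ns=${1}_ns    # alias: writes land in nvme2_ns
        local ns
        for ns in "${@:2}"; do
            _ctrl_ns[${ns##*n}]=$ns  # "ng2n1" -> key "1"
        done
    }
    collect_ns nvme2 ng2n1 ng2n2
    echo "${nvme2_ns[1]} ${nvme2_ns[2]}"   # -> ng2n1 ng2n2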
00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 
06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:01.121 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:28:01.122 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 
06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.122 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:28:01.123 
06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
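The run of eval/assignment pairs above is functions.sh's nvme_get at work: every "field : value" line that "nvme id-ns /dev/ng2n3" prints is split on the first colon and folded into the global associative array ng2n3. A minimal standalone sketch of that pattern, reconstructed from the trace (the whitespace trimming and the plain "nvme" binary name are assumptions; the trace itself invokes /usr/local/src/nvme-cli/nvme):

#!/usr/bin/env bash
# Fold "field : value" output of an nvme-cli identify command into a
# globally-scoped associative array named by the caller, mirroring the
# traced nvme_get (functions.sh@17-23) that populates ng2n3 above.
nvme_get() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                 # global array, as at functions.sh@20
    while IFS=: read -r reg val; do     # split each line on the first ':'
        reg=${reg//[[:space:]]/}        # "nsze   " -> "nsze" (assumed trim)
        val=${val# }                    # drop the space after the colon
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"      # e.g. ng2n3[nsze]=0x100000
    done < <(nvme id-ns "$dev")
}

nvme_get ng2n3 /dev/ng2n3               # hypothetical call mirroring the trace
echo "nsze=${ng2n3[nsze]} nlbaf=${ng2n3[nlbaf]}"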
00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:28:01.123 06:54:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:01.123 06:54:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:28:01.123 06:54:33 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.123 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:01.124 
06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:28:01.124 06:54:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:01.124 
06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
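The eight lbafN entries captured here, together with flbas, are enough to compute the namespace's logical block size: the low nibble of flbas indexes the active LBA format (0x4 -> lbaf4, the entry the trace marks "(in use)"), and the lbads field of that entry is the block-size shift. A small sketch using the exact values from this trace (the flbas bit layout follows the NVMe spec; only the low nibble is handled here):

#!/usr/bin/env bash
# Derive the in-use logical block size from the fields the trace just
# stored for nvme2n1. Array values are copied verbatim from the log above.
declare -A nvme2n1=(
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)

fmt=$(( ${nvme2n1[flbas]} & 0xf ))          # low nibble selects the format
entry=${nvme2n1[lbaf$fmt]}
lbads=${entry#*lbads:}; lbads=${lbads%% *}  # extract the shift ("12")
echo "lbaf$fmt in use: $((1 << lbads))-byte blocks"   # 4096-byte blocks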
00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:28:01.124 06:54:33 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:28:01.124 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:28:01.125 06:54:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:28:01.125 06:54:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.125 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:28:01.126 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:28:01.126 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:28:01.126 06:54:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:28:01.126 06:54:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:28:01.126 06:54:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:01.126 06:54:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
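Every assignment in this trace comes from the same short loop at nvme/functions.sh@16-23: nvme-cli prints id-ctrl/id-ns output as one "field : value" line per register, so IFS=: with read -r reg val splits each line, and an eval stores it into a global associative array named after the device. A condensed sketch of that pattern, with the function name and the trimming details illustrative rather than copied from functions.sh:

  nvme_get_sketch() {                        # usage: nvme_get_sketch nvme3 id-ctrl /dev/nvme3
      local ref=$1 subcmd=$2 dev=$3 reg val
      local -gA "$ref=()"                    # global array named after the device, as at functions.sh@20
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue          # skip banner lines with no "field : value" pair
          reg=${reg%%[[:space:]]*}           # strip the padding nvme-cli puts after the field name
          val=${val# }
          eval "${ref}[\$reg]=\$val"         # e.g. nvme3[vid]=0x1b36
      done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
  }

After nvme_get_sketch nvme3 id-ctrl /dev/nvme3, ${nvme3[sn]} holds the padded serial captured above.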
00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:28:01.126 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.384 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.384 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:28:01.384 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:28:01.384 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 
06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:28:01.385 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:28:01.386 06:54:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:28:01.387 06:54:33 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:28:01.387 06:54:33 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:28:01.387 06:54:33 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:28:01.387 06:54:33 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:28:01.387 06:54:33 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:01.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:02.214 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:02.214 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:28:02.214 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:02.472 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:28:02.472 06:54:34 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:28:02.472 06:54:34 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:02.472 06:54:34 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:02.472 06:54:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:28:02.472 ************************************ 00:28:02.472 START TEST nvme_flexible_data_placement 00:28:02.472 ************************************ 00:28:02.472 06:54:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:28:02.730 Initializing NVMe Controllers 00:28:02.730 Attaching to 0000:00:13.0 00:28:02.730 Controller supports FDP Attached to 0000:00:13.0 00:28:02.730 Namespace ID: 1 Endurance Group ID: 1 00:28:02.730 Initialization complete. 
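Editor's note on the long xtrace above: the 'IFS=:' / 'read -r reg val' loop stores every identify-controller field (mxtmt, sanicap, ..., ctratt) in a per-controller array, and get_ctrls_with_feature then reduces FDP detection to a single bit test, since CTRATT bit 19 advertises Flexible Data Placement. Only nvme3 (ctratt=0x88010) has that bit set; the other controllers report 0x8000 (bit 15 only), so nvme3 at 0000:00:13.0 is selected. The sketch below is illustrative, not the functions.sh implementation itself (which routes through namerefs); it assumes nvme-cli's "nvme id-ctrl" text output with one "name : value" field per line.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # 'ctratt   ' -> 'ctratt'
    [[ -n $reg && -n $val ]] && ctrl[$reg]=${val# }
done < <(nvme id-ctrl /dev/nvme3)
# Bit 19 of CTRATT advertises Flexible Data Placement:
if (( ${ctrl[ctratt]:-0} & 1 << 19 )); then  # true for 0x88010, false for 0x8000
    echo "controller supports FDP"
fi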
00:28:02.730 00:28:02.730 ================================== 00:28:02.730 == FDP tests for Namespace: #01 == 00:28:02.730 ================================== 00:28:02.730 00:28:02.730 Get Feature: FDP: 00:28:02.730 ================= 00:28:02.730 Enabled: Yes 00:28:02.730 FDP configuration Index: 0 00:28:02.730 00:28:02.730 FDP configurations log page 00:28:02.730 =========================== 00:28:02.730 Number of FDP configurations: 1 00:28:02.730 Version: 0 00:28:02.730 Size: 112 00:28:02.730 FDP Configuration Descriptor: 0 00:28:02.730 Descriptor Size: 96 00:28:02.730 Reclaim Group Identifier format: 2 00:28:02.730 FDP Volatile Write Cache: Not Present 00:28:02.730 FDP Configuration: Valid 00:28:02.730 Vendor Specific Size: 0 00:28:02.730 Number of Reclaim Groups: 2 00:28:02.730 Number of Reclaim Unit Handles: 8 00:28:02.730 Max Placement Identifiers: 128 00:28:02.730 Number of Namespaces Supported: 256 00:28:02.730 Reclaim Unit Nominal Size: 6000000 bytes 00:28:02.730 Estimated Reclaim Unit Time Limit: Not Reported 00:28:02.730 RUH Desc #000: RUH Type: Initially Isolated 00:28:02.730 RUH Desc #001: RUH Type: Initially Isolated 00:28:02.730 RUH Desc #002: RUH Type: Initially Isolated 00:28:02.730 RUH Desc #003: RUH Type: Initially Isolated 00:28:02.730 RUH Desc #004: RUH Type: Initially Isolated 00:28:02.730 RUH Desc #005: RUH Type: Initially Isolated 00:28:02.730 RUH Desc #006: RUH Type: Initially Isolated 00:28:02.730 RUH Desc #007: RUH Type: Initially Isolated 00:28:02.730 00:28:02.730 FDP reclaim unit handle usage log page 00:28:02.730 ====================================== 00:28:02.730 Number of Reclaim Unit Handles: 8 00:28:02.730 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:28:02.730 RUH Usage Desc #001: RUH Attributes: Unused 00:28:02.730 RUH Usage Desc #002: RUH Attributes: Unused 00:28:02.730 RUH Usage Desc #003: RUH Attributes: Unused 00:28:02.730 RUH Usage Desc #004: RUH Attributes: Unused 00:28:02.730 RUH Usage Desc #005: RUH Attributes: Unused 00:28:02.730 RUH Usage Desc #006: RUH Attributes: Unused 00:28:02.730 RUH Usage Desc #007: RUH Attributes: Unused 00:28:02.730 00:28:02.730 FDP statistics log page 00:28:02.730 ======================= 00:28:02.730 Host bytes with metadata written: 826126336 00:28:02.730 Media bytes with metadata written: 826228736 00:28:02.730 Media bytes erased: 0 00:28:02.730 00:28:02.730 FDP Reclaim unit handle status 00:28:02.730 ============================== 00:28:02.730 Number of RUHS descriptors: 2 00:28:02.730 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004c25 00:28:02.731 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:28:02.731 00:28:02.731 FDP write on placement id: 0 success 00:28:02.731 00:28:02.731 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:28:02.731 00:28:02.731 IO mgmt send: RUH update for Placement ID: #0 Success 00:28:02.731 00:28:02.731 Get Feature: FDP Events for Placement handle: #0 00:28:02.731 ======================== 00:28:02.731 Number of FDP Events: 6 00:28:02.731 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:28:02.731 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:28:02.731 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:28:02.731 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:28:02.731 FDP Event: #4 Type: Media Reallocated Enabled: No 00:28:02.731 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:28:02.731 00:28:02.731 FDP events log page
00:28:02.731 =================== 00:28:02.731 Number of FDP events: 1 00:28:02.731 FDP Event #0: 00:28:02.731 Event Type: RU Not Written to Capacity 00:28:02.731 Placement Identifier: Valid 00:28:02.731 NSID: Valid 00:28:02.731 Location: Valid 00:28:02.731 Placement Identifier: 0 00:28:02.731 Event Timestamp: a 00:28:02.731 Namespace Identifier: 1 00:28:02.731 Reclaim Group Identifier: 0 00:28:02.731 Reclaim Unit Handle Identifier: 0 00:28:02.731 00:28:02.731 FDP test passed 00:28:02.731 00:28:02.731 real 0m0.377s 00:28:02.731 user 0m0.178s 00:28:02.731 sys 0m0.096s 00:28:02.731 06:54:35 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:02.731 06:54:35 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:28:02.731 ************************************ 00:28:02.731 END TEST nvme_flexible_data_placement 00:28:02.731 ************************************ 00:28:02.988 00:28:02.989 real 0m8.262s 00:28:02.989 user 0m1.578s 00:28:02.989 sys 0m1.678s 00:28:02.989 06:54:35 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:02.989 ************************************ 00:28:02.989 06:54:35 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:28:02.989 END TEST nvme_fdp 00:28:02.989 ************************************ 00:28:02.989 06:54:35 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:28:02.989 06:54:35 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:28:02.989 06:54:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:02.989 06:54:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:02.989 06:54:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.989 ************************************ 00:28:02.989 START TEST nvme_rpc 00:28:02.989 ************************************ 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:28:02.989 * Looking for test storage... 
00:28:02.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:02.989 06:54:35 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:02.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.989 --rc genhtml_branch_coverage=1 00:28:02.989 --rc genhtml_function_coverage=1 00:28:02.989 --rc genhtml_legend=1 00:28:02.989 --rc geninfo_all_blocks=1 00:28:02.989 --rc geninfo_unexecuted_blocks=1 00:28:02.989 00:28:02.989 ' 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:02.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.989 --rc genhtml_branch_coverage=1 00:28:02.989 --rc genhtml_function_coverage=1 00:28:02.989 --rc genhtml_legend=1 00:28:02.989 --rc geninfo_all_blocks=1 00:28:02.989 --rc geninfo_unexecuted_blocks=1 00:28:02.989 00:28:02.989 ' 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:02.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.989 --rc genhtml_branch_coverage=1 00:28:02.989 --rc genhtml_function_coverage=1 00:28:02.989 --rc genhtml_legend=1 00:28:02.989 --rc geninfo_all_blocks=1 00:28:02.989 --rc geninfo_unexecuted_blocks=1 00:28:02.989 00:28:02.989 ' 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:02.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.989 --rc genhtml_branch_coverage=1 00:28:02.989 --rc genhtml_function_coverage=1 00:28:02.989 --rc genhtml_legend=1 00:28:02.989 --rc geninfo_all_blocks=1 00:28:02.989 --rc geninfo_unexecuted_blocks=1 00:28:02.989 00:28:02.989 ' 00:28:02.989 06:54:35 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:02.989 06:54:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:02.989 06:54:35 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:28:03.254 06:54:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:28:03.254 06:54:35 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67091 00:28:03.254 06:54:35 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:28:03.254 06:54:35 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:28:03.254 06:54:35 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67091 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67091 ']' 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.254 06:54:35 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:03.254 [2024-12-06 06:54:35.723501] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
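Editor's note: before any RPC calls, the nvme_rpc prologue above resolves its target device by asking gen_nvme.sh for an SPDK bdev config covering every local NVMe controller, extracting the PCI addresses with jq, and taking the first one. Condensed from the traced get_first_nvme_bdf, with paths exactly as in this run:
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 ... 0000:00:13.0 on this host
echo "${bdfs[0]}"            # nvme_rpc.sh binds Nvme0 to the first: 0000:00:10.0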
00:28:03.254 [2024-12-06 06:54:35.723681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67091 ] 00:28:03.519 [2024-12-06 06:54:35.934219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:03.519 [2024-12-06 06:54:36.062190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.519 [2024-12-06 06:54:36.062207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.454 06:54:36 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.454 06:54:36 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:04.454 06:54:36 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:28:04.712 Nvme0n1 00:28:04.712 06:54:37 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:28:04.712 06:54:37 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:28:04.970 request: 00:28:04.970 { 00:28:04.970 "bdev_name": "Nvme0n1", 00:28:04.970 "filename": "non_existing_file", 00:28:04.970 "method": "bdev_nvme_apply_firmware", 00:28:04.970 "req_id": 1 00:28:04.970 } 00:28:04.970 Got JSON-RPC error response 00:28:04.970 response: 00:28:04.970 { 00:28:04.970 "code": -32603, 00:28:04.970 "message": "open file failed." 00:28:04.970 } 00:28:04.970 06:54:37 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:28:04.970 06:54:37 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:28:04.970 06:54:37 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:05.228 06:54:37 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:28:05.228 06:54:37 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67091 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67091 ']' 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67091 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67091 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.228 killing process with pid 67091 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67091' 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67091 00:28:05.228 06:54:37 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67091 00:28:07.768 00:28:07.768 real 0m4.336s 00:28:07.768 user 0m8.512s 00:28:07.768 sys 0m0.558s 00:28:07.768 06:54:39 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.768 06:54:39 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:07.768 ************************************ 00:28:07.768 END TEST nvme_rpc 00:28:07.768 ************************************ 00:28:07.768 06:54:39 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:28:07.768 06:54:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:28:07.768 06:54:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.768 06:54:39 -- common/autotest_common.sh@10 -- # set +x 00:28:07.768 ************************************ 00:28:07.768 START TEST nvme_rpc_timeouts 00:28:07.768 ************************************ 00:28:07.768 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:28:07.768 * Looking for test storage... 00:28:07.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:07.768 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:07.768 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:28:07.768 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:07.768 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:07.768 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.768 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.768 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.768 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.768 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.768 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.769 06:54:39 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.769 --rc genhtml_branch_coverage=1 00:28:07.769 --rc genhtml_function_coverage=1 00:28:07.769 --rc genhtml_legend=1 00:28:07.769 --rc geninfo_all_blocks=1 00:28:07.769 --rc geninfo_unexecuted_blocks=1 00:28:07.769 00:28:07.769 ' 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.769 --rc genhtml_branch_coverage=1 00:28:07.769 --rc genhtml_function_coverage=1 00:28:07.769 --rc genhtml_legend=1 00:28:07.769 --rc geninfo_all_blocks=1 00:28:07.769 --rc geninfo_unexecuted_blocks=1 00:28:07.769 00:28:07.769 ' 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.769 --rc genhtml_branch_coverage=1 00:28:07.769 --rc genhtml_function_coverage=1 00:28:07.769 --rc genhtml_legend=1 00:28:07.769 --rc geninfo_all_blocks=1 00:28:07.769 --rc geninfo_unexecuted_blocks=1 00:28:07.769 00:28:07.769 ' 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.769 --rc genhtml_branch_coverage=1 00:28:07.769 --rc genhtml_function_coverage=1 00:28:07.769 --rc genhtml_legend=1 00:28:07.769 --rc geninfo_all_blocks=1 00:28:07.769 --rc geninfo_unexecuted_blocks=1 00:28:07.769 00:28:07.769 ' 00:28:07.769 06:54:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:07.769 06:54:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67167 00:28:07.769 06:54:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67167 00:28:07.769 06:54:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67205 00:28:07.769 06:54:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:28:07.769 06:54:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:28:07.769 06:54:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67205 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67205 ']' 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.769 06:54:39 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:28:07.769 [2024-12-06 06:54:40.094261] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:28:07.769 [2024-12-06 06:54:40.094499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67205 ] 00:28:07.769 [2024-12-06 06:54:40.293800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:08.033 [2024-12-06 06:54:40.467040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.033 [2024-12-06 06:54:40.467046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.968 Checking default timeout settings: 00:28:08.968 06:54:41 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.968 06:54:41 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:28:08.968 06:54:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:28:08.968 06:54:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:09.227 Making settings changes with rpc: 00:28:09.227 06:54:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:28:09.227 06:54:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:28:09.485 Check default vs. modified settings: 00:28:09.485 06:54:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:28:09.485 06:54:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67167 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67167 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:28:10.052 Setting action_on_timeout is changed as expected. 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67167 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67167 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:28:10.052 Setting timeout_us is changed as expected. 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67167 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67167 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:28:10.052 Setting timeout_admin_us is changed as expected. 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67167 /tmp/settings_modified_67167 00:28:10.052 06:54:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67205 00:28:10.052 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67205 ']' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67205 00:28:10.052 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:28:10.052 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67205 00:28:10.052 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:10.052 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:10.052 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67205' 00:28:10.053 killing process with pid 67205 00:28:10.053 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67205 00:28:10.053 06:54:42 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67205 00:28:11.952 RPC TIMEOUT SETTING TEST PASSED. 00:28:11.952 06:54:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
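Editor's note: with all three settings verified, the sequence this test exercised reduces to a handful of rpc.py calls. A sketch of reproducing it by hand, assuming spdk_tgt is already listening on the default socket (the /tmp file names here are illustrative; the run above suffixes them with the spdk_tgt pid):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified
# The test's per-setting check: second column, punctuation stripped -> 12000000
grep timeout_us /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'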
00:28:11.952 00:28:11.952 real 0m4.770s 00:28:11.952 user 0m9.291s 00:28:11.952 sys 0m0.660s 00:28:11.952 06:54:44 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:11.952 06:54:44 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:28:11.952 ************************************ 00:28:11.952 END TEST nvme_rpc_timeouts 00:28:11.952 ************************************ 00:28:12.210 06:54:44 -- spdk/autotest.sh@239 -- # uname -s 00:28:12.210 06:54:44 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:28:12.210 06:54:44 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:28:12.210 06:54:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.210 06:54:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.210 06:54:44 -- common/autotest_common.sh@10 -- # set +x 00:28:12.210 ************************************ 00:28:12.210 START TEST sw_hotplug 00:28:12.210 ************************************ 00:28:12.210 06:54:44 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:28:12.210 * Looking for test storage... 00:28:12.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:12.210 06:54:44 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:12.210 06:54:44 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:12.210 06:54:44 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:28:12.210 06:54:44 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.210 06:54:44 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:28:12.468 06:54:44 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.468 06:54:44 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.468 06:54:44 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.468 06:54:44 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:28:12.468 06:54:44 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.468 06:54:44 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:12.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.468 --rc genhtml_branch_coverage=1 00:28:12.468 --rc genhtml_function_coverage=1 00:28:12.468 --rc genhtml_legend=1 00:28:12.468 --rc geninfo_all_blocks=1 00:28:12.468 --rc geninfo_unexecuted_blocks=1 00:28:12.468 00:28:12.468 ' 00:28:12.468 06:54:44 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:12.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.468 --rc genhtml_branch_coverage=1 00:28:12.468 --rc genhtml_function_coverage=1 00:28:12.468 --rc genhtml_legend=1 00:28:12.468 --rc geninfo_all_blocks=1 00:28:12.468 --rc geninfo_unexecuted_blocks=1 00:28:12.468 00:28:12.468 ' 00:28:12.468 06:54:44 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:12.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.468 --rc genhtml_branch_coverage=1 00:28:12.468 --rc genhtml_function_coverage=1 00:28:12.469 --rc genhtml_legend=1 00:28:12.469 --rc geninfo_all_blocks=1 00:28:12.469 --rc geninfo_unexecuted_blocks=1 00:28:12.469 00:28:12.469 ' 00:28:12.469 06:54:44 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:12.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.469 --rc genhtml_branch_coverage=1 00:28:12.469 --rc genhtml_function_coverage=1 00:28:12.469 --rc genhtml_legend=1 00:28:12.469 --rc geninfo_all_blocks=1 00:28:12.469 --rc geninfo_unexecuted_blocks=1 00:28:12.469 00:28:12.469 ' 00:28:12.469 06:54:44 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:12.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:12.726 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:12.726 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:12.726 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:12.726 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:12.726 06:54:45 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:28:12.726 06:54:45 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:28:12.726 06:54:45 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:28:12.726 06:54:45 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@233 -- # local class 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@18 -- # local i 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@18 -- # local i 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@18 -- # local i 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:28:12.726 06:54:45 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@18 -- # local i 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:12.726 06:54:45 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:28:12.983 06:54:45 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:28:12.983 06:54:45 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:28:12.983 06:54:45 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:28:12.983 06:54:45 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:13.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:13.240 Waiting for block devices as requested 00:28:13.504 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:13.504 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:13.504 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:13.761 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.028 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:19.028 06:54:51 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:28:19.028 06:54:51 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:19.028 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:28:19.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:19.285 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:28:19.543 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:28:19.800 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:19.800 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:28:19.800 06:54:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68073 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:28:19.800 06:54:52 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:28:19.800 06:54:52 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:28:19.800 06:54:52 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:28:19.800 06:54:52 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:28:19.800 06:54:52 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:28:19.800 06:54:52 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:28:20.057 Initializing NVMe Controllers 00:28:20.057 Attaching to 0000:00:10.0 00:28:20.057 Attaching to 0000:00:11.0 00:28:20.057 Attached to 0000:00:10.0 00:28:20.057 Attached to 0000:00:11.0 00:28:20.057 Initialization complete. Starting I/O... 
00:28:20.057 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:28:20.057 QEMU NVMe Ctrl (12341 ): 8 I/Os completed (+8) 00:28:20.057 00:28:21.429 QEMU NVMe Ctrl (12340 ): 1262 I/Os completed (+1262) 00:28:21.429 QEMU NVMe Ctrl (12341 ): 1329 I/Os completed (+1321) 00:28:21.429 00:28:22.358 QEMU NVMe Ctrl (12340 ): 2906 I/Os completed (+1644) 00:28:22.358 QEMU NVMe Ctrl (12341 ): 2963 I/Os completed (+1634) 00:28:22.358 00:28:23.289 QEMU NVMe Ctrl (12340 ): 4749 I/Os completed (+1843) 00:28:23.289 QEMU NVMe Ctrl (12341 ): 4847 I/Os completed (+1884) 00:28:23.289 00:28:24.244 QEMU NVMe Ctrl (12340 ): 6290 I/Os completed (+1541) 00:28:24.244 QEMU NVMe Ctrl (12341 ): 6599 I/Os completed (+1752) 00:28:24.244 00:28:25.180 QEMU NVMe Ctrl (12340 ): 7858 I/Os completed (+1568) 00:28:25.180 QEMU NVMe Ctrl (12341 ): 8328 I/Os completed (+1729) 00:28:25.180 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:26.115 [2024-12-06 06:54:58.376745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:28:26.115 Controller removed: QEMU NVMe Ctrl (12340 ) 00:28:26.115 [2024-12-06 06:54:58.378864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.378935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.378966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.378991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:28:26.115 [2024-12-06 06:54:58.382008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.382074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.382103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.382127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:26.115 [2024-12-06 06:54:58.414246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:28:26.115 Controller removed: QEMU NVMe Ctrl (12341 ) 00:28:26.115 [2024-12-06 06:54:58.416450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.416522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.416562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.416591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:28:26.115 [2024-12-06 06:54:58.419748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.419825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.419862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 [2024-12-06 06:54:58.419886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:26.115 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:28:26.115 EAL: Scan for (pci) bus failed. 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:26.115 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:26.116 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:28:26.116 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:28:26.116 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:26.116 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:26.116 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:26.116 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:28:26.116 Attaching to 0000:00:10.0 00:28:26.116 Attached to 0000:00:10.0 00:28:26.116 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:28:26.116 00:28:26.116 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:28:26.374 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:26.374 06:54:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:28:26.374 Attaching to 0000:00:11.0 00:28:26.374 Attached to 0000:00:11.0 00:28:27.311 QEMU NVMe Ctrl (12340 ): 1816 I/Os completed (+1816) 00:28:27.311 QEMU NVMe Ctrl (12341 ): 1655 I/Os completed (+1655) 00:28:27.311 00:28:28.247 QEMU NVMe Ctrl (12340 ): 3531 I/Os completed (+1715) 00:28:28.247 QEMU NVMe Ctrl (12341 ): 3498 I/Os completed (+1843) 00:28:28.247 00:28:29.183 QEMU NVMe Ctrl (12340 ): 5131 I/Os completed (+1600) 00:28:29.183 QEMU NVMe Ctrl (12341 ): 5464 I/Os completed (+1966) 00:28:29.183 00:28:30.120 QEMU NVMe Ctrl (12340 ): 6656 I/Os completed (+1525) 00:28:30.120 QEMU NVMe Ctrl (12341 ): 7123 I/Os completed (+1659) 00:28:30.120 00:28:31.055 QEMU NVMe Ctrl (12340 ): 8265 I/Os completed (+1609) 00:28:31.055 QEMU NVMe Ctrl (12341 ): 8843 I/Os completed (+1720) 00:28:31.055 00:28:32.459 QEMU NVMe Ctrl (12340 ): 10036 I/Os completed (+1771) 00:28:32.459 QEMU NVMe Ctrl (12341 ): 10678 I/Os completed (+1835) 00:28:32.459 00:28:33.392 QEMU NVMe Ctrl (12340 ): 11624 I/Os completed (+1588) 00:28:33.392 
QEMU NVMe Ctrl (12341 ): 12727 I/Os completed (+2049) 00:28:33.392 00:28:34.324 QEMU NVMe Ctrl (12340 ): 13284 I/Os completed (+1660) 00:28:34.324 QEMU NVMe Ctrl (12341 ): 14475 I/Os completed (+1748) 00:28:34.324 00:28:35.260 QEMU NVMe Ctrl (12340 ): 14941 I/Os completed (+1657) 00:28:35.260 QEMU NVMe Ctrl (12341 ): 16257 I/Os completed (+1782) 00:28:35.260 00:28:36.194 QEMU NVMe Ctrl (12340 ): 16559 I/Os completed (+1618) 00:28:36.194 QEMU NVMe Ctrl (12341 ): 18019 I/Os completed (+1762) 00:28:36.194 00:28:37.129 QEMU NVMe Ctrl (12340 ): 18082 I/Os completed (+1523) 00:28:37.129 QEMU NVMe Ctrl (12341 ): 19736 I/Os completed (+1717) 00:28:37.129 00:28:38.063 QEMU NVMe Ctrl (12340 ): 19698 I/Os completed (+1616) 00:28:38.063 QEMU NVMe Ctrl (12341 ): 21470 I/Os completed (+1734) 00:28:38.063 00:28:38.322 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:28:38.322 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:38.322 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:38.322 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:38.322 [2024-12-06 06:55:10.716945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:28:38.323 Controller removed: QEMU NVMe Ctrl (12340 ) 00:28:38.323 [2024-12-06 06:55:10.719912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.720175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.720245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.720307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:28:38.323 [2024-12-06 06:55:10.725167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.725240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.725273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.725301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/device 00:28:38.323 EAL: Scan for (pci) bus failed. 00:28:38.323 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:38.323 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:38.323 [2024-12-06 06:55:10.744976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:28:38.323 Controller removed: QEMU NVMe Ctrl (12341 ) 00:28:38.323 [2024-12-06 06:55:10.747161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.747361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.747431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.747471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:28:38.323 [2024-12-06 06:55:10.750535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.750598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.750627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 [2024-12-06 06:55:10.750650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:38.323 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_device 00:28:38.323 EAL: Scan for (pci) bus failed. 00:28:38.323 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:28:38.323 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:28:38.323 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:38.323 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:38.323 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:28:38.582 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:28:38.582 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:38.582 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:38.582 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:38.582 06:55:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:28:38.582 Attaching to 0000:00:10.0 00:28:38.582 Attached to 0000:00:10.0 00:28:38.582 06:55:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:28:38.582 06:55:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:38.582 06:55:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:28:38.582 Attaching to 0000:00:11.0 00:28:38.582 Attached to 0000:00:11.0 00:28:39.148 QEMU NVMe Ctrl (12340 ): 1209 I/Os completed (+1209) 00:28:39.148 QEMU NVMe Ctrl (12341 ): 1074 I/Os completed (+1074) 00:28:39.148 00:28:40.083 QEMU NVMe Ctrl (12340 ): 2725 I/Os completed (+1516) 00:28:40.083 QEMU NVMe Ctrl (12341 ): 2939 I/Os completed (+1865) 00:28:40.083 00:28:41.539 QEMU NVMe Ctrl (12340 ): 4293 I/Os completed (+1568) 00:28:41.539 QEMU NVMe Ctrl (12341 ): 4714 I/Os completed (+1775) 00:28:41.539 00:28:42.119 QEMU NVMe Ctrl (12340 ): 6171 I/Os completed (+1878) 00:28:42.119 QEMU NVMe Ctrl (12341 ): 6966 I/Os completed (+2252) 00:28:42.119 00:28:43.051 QEMU NVMe Ctrl (12340 ): 7849 I/Os completed (+1678) 00:28:43.051 QEMU NVMe Ctrl (12341 ): 8729 I/Os completed (+1763) 00:28:43.051 00:28:44.421 QEMU NVMe Ctrl (12340 ): 9421 I/Os completed (+1572) 00:28:44.421 QEMU NVMe Ctrl (12341 ): 10445 I/Os completed (+1716) 00:28:44.421 00:28:45.371 QEMU NVMe Ctrl (12340 ): 10975 I/Os completed (+1554) 00:28:45.371 QEMU NVMe Ctrl (12341 ): 12334 I/Os completed (+1889) 00:28:45.371 
00:28:46.307 QEMU NVMe Ctrl (12340 ): 12763 I/Os completed (+1788) 00:28:46.307 QEMU NVMe Ctrl (12341 ): 14148 I/Os completed (+1814) 00:28:46.307 00:28:47.242 QEMU NVMe Ctrl (12340 ): 14327 I/Os completed (+1564) 00:28:47.242 QEMU NVMe Ctrl (12341 ): 15898 I/Os completed (+1750) 00:28:47.242 00:28:48.178 QEMU NVMe Ctrl (12340 ): 16018 I/Os completed (+1691) 00:28:48.178 QEMU NVMe Ctrl (12341 ): 17735 I/Os completed (+1837) 00:28:48.178 00:28:49.114 QEMU NVMe Ctrl (12340 ): 17712 I/Os completed (+1694) 00:28:49.114 QEMU NVMe Ctrl (12341 ): 19582 I/Os completed (+1847) 00:28:49.114 00:28:50.059 QEMU NVMe Ctrl (12340 ): 19304 I/Os completed (+1592) 00:28:50.059 QEMU NVMe Ctrl (12341 ): 21279 I/Os completed (+1697) 00:28:50.059 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:50.645 [2024-12-06 06:55:23.023386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:28:50.645 Controller removed: QEMU NVMe Ctrl (12340 ) 00:28:50.645 [2024-12-06 06:55:23.026212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.026378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.026474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.026748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:28:50.645 [2024-12-06 06:55:23.031719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.031972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.032190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.032392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:50.645 [2024-12-06 06:55:23.051430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:28:50.645 Controller removed: QEMU NVMe Ctrl (12341 ) 00:28:50.645 [2024-12-06 06:55:23.054003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.054236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.054419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.054585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:28:50.645 [2024-12-06 06:55:23.060423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.060557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.060653] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 [2024-12-06 06:55:23.060835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:50.645 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:28:50.903 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:28:50.904 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:50.904 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:50.904 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:50.904 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:28:50.904 Attaching to 0000:00:10.0 00:28:50.904 Attached to 0000:00:10.0 00:28:50.904 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:28:50.904 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:50.904 06:55:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:28:50.904 Attaching to 0000:00:11.0 00:28:50.904 Attached to 0000:00:11.0 00:28:50.904 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:28:50.904 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:28:50.904 [2024-12-06 06:55:23.341173] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:29:03.104 06:55:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:29:03.104 06:55:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:03.105 06:55:35 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.96 00:29:03.105 06:55:35 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.96 00:29:03.105 06:55:35 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:29:03.105 06:55:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.96 00:29:03.105 06:55:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.96 2 00:29:03.105 remove_attach_helper took 42.96s to complete (handling 2 nvme drive(s)) 06:55:35 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68073 00:29:09.684 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68073) - No such process 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68073 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68623 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:29:09.684 06:55:41 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68623 00:29:09.684 06:55:41 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68623 ']' 00:29:09.684 06:55:41 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.684 06:55:41 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.684 06:55:41 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.684 06:55:41 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.684 06:55:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:09.684 [2024-12-06 06:55:41.468032] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:29:09.684 [2024-12-06 06:55:41.468390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68623 ] 00:29:09.684 [2024-12-06 06:55:41.656295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.684 [2024-12-06 06:55:41.780627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:29:10.251 06:55:42 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.251 06:55:42 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:29:10.251 06:55:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:29:10.251 06:55:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:29:10.251 06:55:42 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:29:10.251 06:55:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:29:10.251 06:55:42 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:29:10.251 06:55:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:29:10.251 06:55:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:29:10.251 06:55:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:16.815 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:16.815 06:55:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.815 06:55:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:16.815 06:55:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.815 [2024-12-06 06:55:48.673865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:29:16.815 [2024-12-06 06:55:48.676622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:16.815 [2024-12-06 06:55:48.676684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.815 [2024-12-06 06:55:48.676728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.815 [2024-12-06 06:55:48.676764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:16.815 [2024-12-06 06:55:48.676781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.815 [2024-12-06 06:55:48.676799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.815 [2024-12-06 06:55:48.676816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:16.816 [2024-12-06 06:55:48.676833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.816 [2024-12-06 06:55:48.676847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.816 [2024-12-06 06:55:48.676870] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:16.816 [2024-12-06 06:55:48.676886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.816 [2024-12-06 06:55:48.676903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.816 06:55:48 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:29:16.816 06:55:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:16.816 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:29:16.816 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:16.816 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:16.816 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:16.816 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:16.816 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:16.816 06:55:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.816 06:55:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:16.816 06:55:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.816 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:29:16.816 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:16.816 [2024-12-06 06:55:49.273864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:29:16.816 [2024-12-06 06:55:49.276749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:16.816 [2024-12-06 06:55:49.276947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.816 [2024-12-06 06:55:49.276987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.816 [2024-12-06 06:55:49.277019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:16.816 [2024-12-06 06:55:49.277040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.816 [2024-12-06 06:55:49.277056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.816 [2024-12-06 06:55:49.277075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:16.816 [2024-12-06 06:55:49.277090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.816 [2024-12-06 06:55:49.277107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.816 [2024-12-06 06:55:49.277122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:16.816 [2024-12-06 06:55:49.277139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.816 [2024-12-06 06:55:49.277154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:17.382 
06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:17.382 06:55:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.382 06:55:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:17.382 06:55:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:17.382 06:55:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:29:17.662 06:55:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:29:17.662 06:55:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:17.662 06:55:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:17.662 06:55:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:17.662 06:55:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:29:17.662 06:55:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:29:17.662 06:55:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:17.662 06:55:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:29.865 06:56:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.865 06:56:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:29.865 06:56:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:29.865 06:56:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.865 06:56:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:29.865 06:56:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.865 [2024-12-06 06:56:02.274177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:29:29.865 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:29.865 [2024-12-06 06:56:02.277286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:29.865 [2024-12-06 06:56:02.277465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.865 [2024-12-06 06:56:02.277619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.865 [2024-12-06 06:56:02.277835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:29.865 [2024-12-06 06:56:02.278002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.865 [2024-12-06 06:56:02.278179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.865 [2024-12-06 06:56:02.278390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:29.865 [2024-12-06 06:56:02.278607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.865 [2024-12-06 06:56:02.278786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.865 [2024-12-06 06:56:02.279007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:29.865 [2024-12-06 06:56:02.279181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.865 [2024-12-06 06:56:02.279356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.434 [2024-12-06 06:56:02.774177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:29:30.434 [2024-12-06 06:56:02.777457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:30.434 [2024-12-06 06:56:02.777735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.434 [2024-12-06 06:56:02.777920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.434 [2024-12-06 06:56:02.778091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:30.434 [2024-12-06 06:56:02.778153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.434 [2024-12-06 06:56:02.778382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.434 [2024-12-06 06:56:02.778469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:30.434 [2024-12-06 06:56:02.778573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.434 [2024-12-06 06:56:02.778652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.434 [2024-12-06 06:56:02.778731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:30.434 [2024-12-06 06:56:02.778895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.434 [2024-12-06 06:56:02.778977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:30.434 06:56:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.434 06:56:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:30.434 06:56:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:30.434 06:56:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:29:30.692 06:56:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:29:30.692 06:56:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:30.692 06:56:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:30.692 06:56:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:30.692 06:56:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:29:30.692 06:56:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:29:30.692 06:56:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:30.692 06:56:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:42.907 06:56:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.907 06:56:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:42.907 06:56:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:42.907 06:56:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.907 06:56:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:42.907 06:56:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.907 [2024-12-06 06:56:15.274434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:29:42.907 [2024-12-06 06:56:15.277333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:42.907 [2024-12-06 06:56:15.277395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.907 [2024-12-06 06:56:15.277419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.907 [2024-12-06 06:56:15.277450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:42.907 [2024-12-06 06:56:15.277467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.907 [2024-12-06 06:56:15.277488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.907 [2024-12-06 06:56:15.277505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:42.907 [2024-12-06 06:56:15.277522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.907 [2024-12-06 06:56:15.277537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.907 [2024-12-06 06:56:15.277555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:42.907 [2024-12-06 06:56:15.277570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.907 [2024-12-06 06:56:15.277587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:29:42.907 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:43.166 [2024-12-06 06:56:15.674423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:29:43.166 [2024-12-06 06:56:15.677510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:43.166 [2024-12-06 06:56:15.677810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.166 [2024-12-06 06:56:15.677858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.166 [2024-12-06 06:56:15.677891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:43.166 [2024-12-06 06:56:15.677917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.166 [2024-12-06 06:56:15.677934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.166 [2024-12-06 06:56:15.677957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:43.166 [2024-12-06 06:56:15.677973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.166 [2024-12-06 06:56:15.677999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.166 [2024-12-06 06:56:15.678015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:43.166 [2024-12-06 06:56:15.678033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.166 [2024-12-06 06:56:15.678048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:43.424 06:56:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.424 06:56:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:43.424 06:56:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:43.424 06:56:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:29:43.682 06:56:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:29:43.682 06:56:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:43.682 06:56:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:43.682 06:56:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:43.682 06:56:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:29:43.682 06:56:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:29:43.682 06:56:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:43.682 06:56:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.61 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.61 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.61 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.61 2 00:29:55.879 remove_attach_helper took 45.61s to complete (handling 2 nvme drive(s)) 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:55.879 06:56:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.879 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:29:55.880 06:56:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.880 06:56:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:55.880 06:56:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.880 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:29:55.880 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:29:55.880 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:29:55.880 06:56:28 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:29:55.880 06:56:28 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:29:55.880 06:56:28 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:29:55.880 06:56:28 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:29:55.880 06:56:28 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:29:55.880 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:29:55.880 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:29:55.880 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:29:55.880 06:56:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:29:55.880 06:56:28 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:02.442 06:56:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.442 06:56:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:02.442 06:56:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.442 [2024-12-06 06:56:34.317062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:30:02.442 [2024-12-06 06:56:34.319067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:02.442 [2024-12-06 06:56:34.319126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.442 [2024-12-06 06:56:34.319151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.442 [2024-12-06 06:56:34.319183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:02.442 [2024-12-06 06:56:34.319200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.442 [2024-12-06 06:56:34.319218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.442 [2024-12-06 06:56:34.319234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:02.442 [2024-12-06 06:56:34.319252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.442 [2024-12-06 06:56:34.319266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.442 [2024-12-06 06:56:34.319284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:02.442 [2024-12-06 06:56:34.319299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.442 [2024-12-06 06:56:34.319320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:30:02.442 06:56:34 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:02.442 06:56:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.442 06:56:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:02.442 06:56:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:30:02.442 06:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:30:02.442 [2024-12-06 06:56:34.917077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:30:02.442 [2024-12-06 06:56:34.919137] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:02.442 [2024-12-06 06:56:34.919208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.442 [2024-12-06 06:56:34.919236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.442 [2024-12-06 06:56:34.919265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:02.442 [2024-12-06 06:56:34.919285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.442 [2024-12-06 06:56:34.919301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.442 [2024-12-06 06:56:34.919319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:02.442 [2024-12-06 06:56:34.919334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.442 [2024-12-06 06:56:34.919352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.442 [2024-12-06 06:56:34.919382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:02.442 [2024-12-06 06:56:34.919399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.442 [2024-12-06 06:56:34.919414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:03.017 06:56:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.017 06:56:35 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:30:03.017 06:56:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:30:03.017 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:30:03.274 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:03.274 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:03.274 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:03.274 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:30:03.274 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:30:03.274 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:03.275 06:56:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:15.477 06:56:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.477 06:56:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:15.477 06:56:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:15.477 06:56:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.477 06:56:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:15.477 [2024-12-06 06:56:47.817267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
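The bare `echo` commands traced above at sw_hotplug.sh@56-62 are the reattach half of each hotplug cycle; `set -x` never prints redirections, so only the strings being written are visible. A hedged reconstruction, assuming the standard Linux sysfs PCI interface (the actual target paths are not in the trace):

    # Assumed sysfs targets; the trace shows only the echoed values
    # (@56: "1", @59: driver name, @60-61: the BDF twice, @62: "").
    detach_dev() {
        echo 1 > "/sys/bus/pci/devices/$1/remove"   # matches @40's bare "echo 1"
    }
    rescan_and_rebind() {
        local driver=$1 dev; shift
        echo 1 > /sys/bus/pci/rescan                                      # @56
        for dev in "$@"; do                                               # @58
            echo "$driver" > "/sys/bus/pci/devices/$dev/driver_override"  # @59
            # One plausible mapping of the two BDF writes at @60-61:
            echo "$dev" > /sys/bus/pci/drivers_probe
            echo "$dev" > "/sys/bus/pci/drivers/$driver/bind" 2> /dev/null || true
            echo '' > "/sys/bus/pci/devices/$dev/driver_override"         # @62
        done
    }
    # e.g. rescan_and_rebind uio_pci_generic 0000:00:10.0 0000:00:11.0

Once the devices are back on uio_pci_generic, the script sleeps 12 s (@66) before asserting that the bdevs re-registered.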
00:30:15.477 [2024-12-06 06:56:47.819383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:15.477 [2024-12-06 06:56:47.819560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.477 [2024-12-06 06:56:47.819756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.477 [2024-12-06 06:56:47.819962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:15.477 [2024-12-06 06:56:47.820088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.477 [2024-12-06 06:56:47.820263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.477 [2024-12-06 06:56:47.820450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:15.477 [2024-12-06 06:56:47.820614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.477 [2024-12-06 06:56:47.820804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.477 [2024-12-06 06:56:47.821028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:15.477 [2024-12-06 06:56:47.821195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.477 [2024-12-06 06:56:47.821358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.477 06:56:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:30:15.477 06:56:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:30:16.045 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:30:16.045 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:16.045 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:16.045 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:16.045 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:16.045 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:16.045 06:56:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.045 06:56:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:16.045 06:56:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.045 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:30:16.045 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:30:16.045 [2024-12-06 06:56:48.417277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
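`bdev_bdfs`, called before and after every detach, is fully visible in the trace at sw_hotplug.sh@12-13: it asks the running SPDK target for its bdevs over JSON-RPC and extracts the backing PCI addresses. The `/dev/fd/63` argument is bash's rendering of a process substitution. Reassembled:

    # Reassembled from sw_hotplug.sh@12-13; rpc_cmd wraps scripts/rpc.py
    # against the running SPDK target (see DEFAULT_RPC_ADDR later in the log).
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

While both controllers are attached it prints `0000:00:10.0` and `0000:00:11.0`; an empty result is what the detach loop is waiting for.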
00:30:16.045 [2024-12-06 06:56:48.419206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:16.045 [2024-12-06 06:56:48.419412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:16.045 [2024-12-06 06:56:48.419459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.045 [2024-12-06 06:56:48.419490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:16.045 [2024-12-06 06:56:48.419513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:16.045 [2024-12-06 06:56:48.419529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.045 [2024-12-06 06:56:48.419549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:16.045 [2024-12-06 06:56:48.419564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:16.045 [2024-12-06 06:56:48.419580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.045 [2024-12-06 06:56:48.419596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:16.045 [2024-12-06 06:56:48.419613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:16.045 [2024-12-06 06:56:48.419628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.045 [2024-12-06 06:56:48.419651] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:30:16.045 [2024-12-06 06:56:48.419669] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:30:16.045 [2024-12-06 06:56:48.419685] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:30:16.045 [2024-12-06 06:56:48.419698] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:30:16.612 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:30:16.612 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:16.612 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:16.612 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:16.612 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:16.612 06:56:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.612 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:16.612 06:56:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:16.612 06:56:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.612 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:30:16.612 06:56:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:30:16.612 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:16.612 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:16.612 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:10.0 00:30:16.612 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:30:16.612 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:16.612 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:16.612 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:16.612 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:30:16.870 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:30:16.870 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:16.870 06:56:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:29.102 06:57:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.102 06:57:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:29.102 06:57:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:29.102 06:57:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.102 06:57:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:29.102 06:57:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.102 [2024-12-06 06:57:01.417506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
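The `(( 2 > 0 ))` / `sleep 0.5` / `Still waiting for %s to be gone` lines at sw_hotplug.sh@50-51 are a poll loop over `bdev_bdfs`, and @68-71 is the matching reattach assertion; the right-hand side of the @71 test reads `\0\0\0\0\:...` only because xtrace escapes every character of a quoted pattern. A sketch (loop shape and variable names assumed):

    # Detach: poll until no NVMe bdev reports a PCI address (@50-51).
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done

    # Reattach: after the rebind and `sleep 12`, both controllers must be back (@70-71).
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]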
00:30:29.102 [2024-12-06 06:57:01.419637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.102 [2024-12-06 06:57:01.419828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.102 [2024-12-06 06:57:01.420006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.102 [2024-12-06 06:57:01.420178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.102 [2024-12-06 06:57:01.420346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.102 [2024-12-06 06:57:01.420540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.102 [2024-12-06 06:57:01.420730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.102 [2024-12-06 06:57:01.420916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.102 [2024-12-06 06:57:01.421099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.102 [2024-12-06 06:57:01.421307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.102 [2024-12-06 06:57:01.421477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.102 [2024-12-06 06:57:01.421636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:30:29.102 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:30:29.360 [2024-12-06 06:57:01.817506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
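Every three-event cycle like the one above runs under `timing_cmd`, whose internals were traced when the run started (autotest_common.sh@709-722: `TIMEFORMAT=%2R`, a captured `time`, the echoed figure); that is how the `remove_attach_helper took 46.10s ...` line just below is produced. A minimal sketch of the pattern, not the exact helper, which also juggles fds so the timed command keeps its own output:

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R      # real seconds, two decimals (@713)
        # `time` reports on the group's stderr; capture that and, in this
        # sketch, discard the timed command's own streams.
        time=$({ time "$@" > /dev/null 2>&1; } 2>&1) || cmd_es=$?
        echo "$time"                     # e.g. "46.10" (@720)
        return "$cmd_es"                 # propagate the command's status (@722)
    }
    # helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    # printf 'remove_attach_helper took %ss to complete ...' "$helper_time"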
00:30:29.360 [2024-12-06 06:57:01.819654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.360 [2024-12-06 06:57:01.819853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.360 [2024-12-06 06:57:01.820037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.361 [2024-12-06 06:57:01.820207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.361 [2024-12-06 06:57:01.820341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.361 [2024-12-06 06:57:01.820521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.361 [2024-12-06 06:57:01.820676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.361 [2024-12-06 06:57:01.820877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.361 [2024-12-06 06:57:01.821058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.361 [2024-12-06 06:57:01.821307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.361 [2024-12-06 06:57:01.821482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.361 [2024-12-06 06:57:01.821635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.361 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:30:29.361 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:29.361 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:29.361 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:29.361 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:29.361 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:29.361 06:57:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.361 06:57:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:29.361 06:57:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.618 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:30:29.618 06:57:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:30:29.618 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:29.618 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:29.618 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:30:29.618 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:30:29.618 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:29.618 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:29.618 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:29.619 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:30:29.876 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:30:29.876 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:29.876 06:57:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.10 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.10 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.10 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.10 2 00:30:42.174 remove_attach_helper took 46.10s to complete (handling 2 nvme drive(s)) 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:30:42.174 06:57:14 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68623 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68623 ']' 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68623 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68623 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68623' 00:30:42.174 killing process with pid 68623 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68623 00:30:42.174 06:57:14 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68623 00:30:44.074 06:57:16 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:44.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:44.899 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:44.899 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:44.899 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:30:44.899 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:30:44.899 00:30:44.899 real 2m32.812s 00:30:44.899 user 1m52.515s 00:30:44.899 sys 0m20.082s 00:30:44.899 06:57:17 sw_hotplug -- 
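`killprocess 68623` just above is traced almost in full (autotest_common.sh@954-978), so it can be reconstructed nearly line for line; only the control flow between the tests is inferred:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                            # @954: '[' -z 68623 ']'
        kill -0 "$pid" || return 0                           # @958: still alive?
        if [[ $(uname) == Linux ]]; then                     # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: "reactor_0" here
        fi
        if [[ $process_name == sudo ]]; then                 # @964, not taken here
            : # the real helper special-cases sudo-wrapped targets
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978: reap the target
    }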
common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.899 06:57:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:44.899 ************************************ 00:30:44.899 END TEST sw_hotplug 00:30:44.899 ************************************ 00:30:44.899 06:57:17 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:30:44.899 06:57:17 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:30:44.899 06:57:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:44.899 06:57:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.899 06:57:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.899 ************************************ 00:30:44.899 START TEST nvme_xnvme 00:30:44.899 ************************************ 00:30:44.899 06:57:17 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:30:45.170 * Looking for test storage... 00:30:45.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.170 06:57:17 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.170 --rc genhtml_branch_coverage=1 00:30:45.170 --rc genhtml_function_coverage=1 00:30:45.170 --rc genhtml_legend=1 00:30:45.170 --rc geninfo_all_blocks=1 00:30:45.170 --rc geninfo_unexecuted_blocks=1 00:30:45.170 00:30:45.170 ' 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.170 --rc genhtml_branch_coverage=1 00:30:45.170 --rc genhtml_function_coverage=1 00:30:45.170 --rc genhtml_legend=1 00:30:45.170 --rc geninfo_all_blocks=1 00:30:45.170 --rc geninfo_unexecuted_blocks=1 00:30:45.170 00:30:45.170 ' 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.170 --rc genhtml_branch_coverage=1 00:30:45.170 --rc genhtml_function_coverage=1 00:30:45.170 --rc genhtml_legend=1 00:30:45.170 --rc geninfo_all_blocks=1 00:30:45.170 --rc geninfo_unexecuted_blocks=1 00:30:45.170 00:30:45.170 ' 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.170 --rc genhtml_branch_coverage=1 00:30:45.170 --rc genhtml_function_coverage=1 00:30:45.170 --rc genhtml_legend=1 00:30:45.170 --rc geninfo_all_blocks=1 00:30:45.170 --rc geninfo_unexecuted_blocks=1 00:30:45.170 00:30:45.170 ' 00:30:45.170 06:57:17 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:30:45.170 06:57:17 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:30:45.170 06:57:17 
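The scripts/common.sh trace running into this line (`IFS=.-:`, `read -ra ver1`, the `(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))` loop) is a field-by-field version comparison: `cmp_versions 1.15 '<' 2` decides that the installed lcov predates 2.x, which selects the coverage flags assembled just after. A simplified reconstruction (assumes purely numeric fields; the traced code normalizes each one through its `decimal` helper):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && gt=1 && break
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && lt=1 && break
        done
        case "$op" in
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
            *) return 2 ;;   # only the operators used here are sketched
        esac
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> exit 0 (true)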
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:30:45.170 06:57:17 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:30:45.170 06:57:17 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:45.170 06:57:17 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:30:45.171 06:57:17 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:30:45.171 06:57:17 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:30:45.171 06:57:17 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:30:45.171 06:57:17 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:45.171 #define SPDK_CONFIG_H 00:30:45.171 #define SPDK_CONFIG_AIO_FSDEV 1 00:30:45.171 #define SPDK_CONFIG_APPS 1 00:30:45.171 #define SPDK_CONFIG_ARCH native 00:30:45.171 #define SPDK_CONFIG_ASAN 1 00:30:45.171 #undef SPDK_CONFIG_AVAHI 00:30:45.171 #undef SPDK_CONFIG_CET 00:30:45.171 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:30:45.171 #define SPDK_CONFIG_COVERAGE 1 00:30:45.171 #define SPDK_CONFIG_CROSS_PREFIX 00:30:45.171 #undef SPDK_CONFIG_CRYPTO 00:30:45.171 #undef SPDK_CONFIG_CRYPTO_MLX5 00:30:45.171 #undef SPDK_CONFIG_CUSTOMOCF 00:30:45.171 #undef SPDK_CONFIG_DAOS 00:30:45.171 #define SPDK_CONFIG_DAOS_DIR 00:30:45.171 #define SPDK_CONFIG_DEBUG 1 00:30:45.171 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:30:45.171 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:30:45.171 #define SPDK_CONFIG_DPDK_INC_DIR 00:30:45.171 #define SPDK_CONFIG_DPDK_LIB_DIR 00:30:45.171 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:45.171 #undef SPDK_CONFIG_DPDK_UADK 00:30:45.171 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:45.171 #define SPDK_CONFIG_EXAMPLES 1 00:30:45.171 #undef SPDK_CONFIG_FC 00:30:45.171 #define SPDK_CONFIG_FC_PATH 00:30:45.171 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:45.171 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:45.171 #define SPDK_CONFIG_FSDEV 1 00:30:45.171 #undef SPDK_CONFIG_FUSE 00:30:45.171 #undef SPDK_CONFIG_FUZZER 00:30:45.171 #define SPDK_CONFIG_FUZZER_LIB 00:30:45.171 #undef SPDK_CONFIG_GOLANG 00:30:45.171 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:30:45.171 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:45.171 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:45.171 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:45.171 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:45.171 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:45.171 #undef SPDK_CONFIG_HAVE_LZ4 00:30:45.171 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:30:45.171 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:30:45.171 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:45.171 #define SPDK_CONFIG_IDXD 1 00:30:45.171 #define SPDK_CONFIG_IDXD_KERNEL 1 00:30:45.171 #undef SPDK_CONFIG_IPSEC_MB 00:30:45.171 #define SPDK_CONFIG_IPSEC_MB_DIR 00:30:45.171 #define SPDK_CONFIG_ISAL 1 00:30:45.172 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:45.172 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:45.172 #define SPDK_CONFIG_LIBDIR 00:30:45.172 #undef SPDK_CONFIG_LTO 00:30:45.172 #define SPDK_CONFIG_MAX_LCORES 128 00:30:45.172 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:30:45.172 #define SPDK_CONFIG_NVME_CUSE 1 00:30:45.172 #undef SPDK_CONFIG_OCF 00:30:45.172 #define SPDK_CONFIG_OCF_PATH 00:30:45.172 #define SPDK_CONFIG_OPENSSL_PATH 00:30:45.172 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:45.172 #define SPDK_CONFIG_PGO_DIR 00:30:45.172 #undef SPDK_CONFIG_PGO_USE 00:30:45.172 #define SPDK_CONFIG_PREFIX /usr/local 00:30:45.172 #undef SPDK_CONFIG_RAID5F 00:30:45.172 #undef SPDK_CONFIG_RBD 00:30:45.172 #define SPDK_CONFIG_RDMA 1 00:30:45.172 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:45.172 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:45.172 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:45.172 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:45.172 #define SPDK_CONFIG_SHARED 1 00:30:45.172 #undef SPDK_CONFIG_SMA 00:30:45.172 #define SPDK_CONFIG_TESTS 1 00:30:45.172 #undef SPDK_CONFIG_TSAN 00:30:45.172 #define SPDK_CONFIG_UBLK 1 00:30:45.172 #define SPDK_CONFIG_UBSAN 1 00:30:45.172 #undef SPDK_CONFIG_UNIT_TESTS 00:30:45.172 #undef SPDK_CONFIG_URING 00:30:45.172 #define SPDK_CONFIG_URING_PATH 00:30:45.172 #undef SPDK_CONFIG_URING_ZNS 00:30:45.172 #undef SPDK_CONFIG_USDT 00:30:45.172 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:30:45.172 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:30:45.172 #undef SPDK_CONFIG_VFIO_USER 00:30:45.172 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:45.172 #define SPDK_CONFIG_VHOST 1 00:30:45.172 #define SPDK_CONFIG_VIRTIO 1 00:30:45.172 #undef SPDK_CONFIG_VTUNE 00:30:45.172 #define SPDK_CONFIG_VTUNE_DIR 00:30:45.172 #define SPDK_CONFIG_WERROR 1 00:30:45.172 #define SPDK_CONFIG_WPDK_DIR 00:30:45.172 #define SPDK_CONFIG_XNVME 1 00:30:45.172 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:45.172 06:57:17 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:45.172 06:57:17 nvme_xnvme -- 
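The wall of `#define SPDK_CONFIG_*` lines above is not a file dump for its own sake: applications.sh@23 expands `$(< config.h)` on the left of a single quoted `[[ ... == *"#define SPDK_CONFIG_DEBUG"* ]]` test, and xtrace prints both the file contents and the pattern, escaping the latter character by character. Roughly:

    config_h=$rootdir/include/spdk/config.h        # existence checked at @22
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build detected; @24 then consults (( SPDK_AUTOTEST_DEBUG_APPS ))
    fi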
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:45.172 06:57:17 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.172 06:57:17 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.172 06:57:17 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.172 06:57:17 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.172 06:57:17 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.172 06:57:17 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.172 06:57:17 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.172 06:57:17 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:30:45.172 06:57:17 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@68 -- # uname -s 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:30:45.172 
06:57:17 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:30:45.172 06:57:17 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:30:45.172 06:57:17 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:30:45.173 06:57:17 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:45.173 06:57:17 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
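
The entries above assemble the sanitizer environment before any test binary runs: the LeakSanitizer suppression file is rebuilt at /var/tmp/asan_suppression_file and the ASan/UBSan option strings are exported once up front. A minimal standalone sketch of the same pattern, with the option strings and the libfuse3 suppression copied from the trace (the test binary at the end is a hypothetical placeholder):

    # Rebuild the LSan suppression file, then export the sanitizer knobs.
    suppfile=/var/tmp/asan_suppression_file          # path from the trace
    rm -f "$suppfile"
    echo 'leak:libfuse3.so' > "$suppfile"            # ignore known leaks inside libfuse3

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$suppfile

    ./my_test_binary                                 # hypothetical ASan/UBSan-instrumented test

Every sanitizer-instrumented child process started after this point inherits the options, which is why the harness exports them once here instead of per test.
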
00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:30:45.173 06:57:17 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69992 ]] 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69992 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Z7Chbw 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.Z7Chbw/tests/xnvme /tmp/spdk.Z7Chbw 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:30:45.174 06:57:17 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976948736 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591166976 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976948736 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591166976 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96423264256 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3279515648 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:30:45.174 * Looking for test storage... 
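
What follows in the trace is set_test_storage walking its candidate directories. Condensed, the logic is: parse df -T once, map each mount point to its free space and filesystem type, then take the first candidate whose mount has enough room and is not tmpfs/ramfs. A simplified sketch under those assumptions (the scaling of df's 1K blocks to bytes is inferred from the byte-sized figures the trace reports):

    requested_size=2214592512                        # ~2 GiB, as in the trace
    declare -A avails fss
    while read -r src fs _ _ avail _ mnt; do         # df -T: source type size used avail use% target
        avails[$mnt]=$((avail * 1024)); fss[$mnt]=$fs
    done < <(df -T | grep -v Filesystem)

    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        if (( target_space >= requested_size )) &&
           [[ ${fss[$mount]} != tmpfs && ${fss[$mount]} != ramfs ]]; then
            export SPDK_TEST_STORAGE=$target_dir     # first fit wins
            break
        fi
    done

Here /home/vagrant/spdk_repo/spdk/test/nvme/xnvme sits on /home (btrfs, ~13 GiB available against the ~2 GiB request), so the first candidate is accepted, matching the "Found test storage" line below.
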
00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:30:45.174 06:57:17 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976948736 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:30:45.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.434 06:57:17 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.434 06:57:17 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.434 --rc genhtml_branch_coverage=1 00:30:45.434 --rc genhtml_function_coverage=1 00:30:45.434 --rc genhtml_legend=1 00:30:45.434 --rc geninfo_all_blocks=1 00:30:45.434 --rc geninfo_unexecuted_blocks=1 00:30:45.434 00:30:45.434 ' 00:30:45.435 06:57:17 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.435 --rc genhtml_branch_coverage=1 00:30:45.435 --rc genhtml_function_coverage=1 00:30:45.435 --rc genhtml_legend=1 00:30:45.435 --rc geninfo_all_blocks=1 
00:30:45.435 --rc geninfo_unexecuted_blocks=1 00:30:45.435 00:30:45.435 ' 00:30:45.435 06:57:17 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.435 --rc genhtml_branch_coverage=1 00:30:45.435 --rc genhtml_function_coverage=1 00:30:45.435 --rc genhtml_legend=1 00:30:45.435 --rc geninfo_all_blocks=1 00:30:45.435 --rc geninfo_unexecuted_blocks=1 00:30:45.435 00:30:45.435 ' 00:30:45.435 06:57:17 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.435 --rc genhtml_branch_coverage=1 00:30:45.435 --rc genhtml_function_coverage=1 00:30:45.435 --rc genhtml_legend=1 00:30:45.435 --rc geninfo_all_blocks=1 00:30:45.435 --rc geninfo_unexecuted_blocks=1 00:30:45.435 00:30:45.435 ' 00:30:45.435 06:57:17 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:45.435 06:57:17 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.435 06:57:17 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.435 06:57:17 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.435 06:57:17 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.435 06:57:17 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.435 06:57:17 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.435 06:57:17 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.435 06:57:17 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:30:45.435 06:57:17 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.435 06:57:17 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:30:45.435 06:57:17 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:45.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:45.953 Waiting for block devices as requested 00:30:45.953 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:45.953 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:46.212 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:46.212 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:51.575 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:51.575 06:57:23 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:30:51.575 06:57:24 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:30:51.575 06:57:24 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:30:51.832 06:57:24 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:30:51.832 06:57:24 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:30:51.832 No valid GPT data, bailing 00:30:51.832 06:57:24 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:51.832 06:57:24 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:30:51.832 06:57:24 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:30:51.832 06:57:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:30:51.832 06:57:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:51.832 06:57:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.833 06:57:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:51.833 ************************************ 00:30:51.833 START TEST xnvme_rpc 00:30:51.833 ************************************ 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70395 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70395 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70395 ']' 00:30:51.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.833 06:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:52.090 [2024-12-06 06:57:24.527336] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
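
waitforlisten blocks here until the freshly launched spdk_tgt answers on its RPC socket. A minimal sketch of that idea, assuming scripts/rpc.py as the probe (pid, socket path, and retry budget are taken from the trace; the real helper's liveness probe differs in detail):

    spdk_tgt_pid=70395                       # pid from the trace
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do          # max_retries=100, as above
        kill -0 "$spdk_tgt_pid" || exit 1    # bail out if the target died early
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            break                            # socket is up and answering RPCs
        fi
        sleep 0.5
    done

Only once this returns does the test issue bdev_xnvme_create and the configuration queries that follow.
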
00:30:52.090 [2024-12-06 06:57:24.527668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70395 ] 00:30:52.349 [2024-12-06 06:57:24.720840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.349 [2024-12-06 06:57:24.847422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.280 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.280 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:53.280 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:30:53.280 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.280 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:53.280 xnvme_bdev 00:30:53.280 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:53.281 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70395 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70395 ']' 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70395 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70395 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.538 killing process with pid 70395 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70395' 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70395 00:30:53.538 06:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70395 00:30:56.071 00:30:56.071 real 0m3.655s 00:30:56.071 user 0m3.905s 00:30:56.071 sys 0m0.437s 00:30:56.071 06:57:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.071 06:57:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:56.071 ************************************ 00:30:56.071 END TEST xnvme_rpc 00:30:56.071 ************************************ 00:30:56.071 06:57:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:30:56.071 06:57:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:56.071 06:57:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.071 06:57:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:56.071 ************************************ 00:30:56.071 START TEST xnvme_bdevperf 00:30:56.071 ************************************ 00:30:56.071 06:57:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:30:56.071 06:57:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:30:56.071 06:57:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:30:56.071 06:57:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:56.071 06:57:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:30:56.071 06:57:28 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:30:56.071 06:57:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:30:56.071 06:57:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.071 { 00:30:56.071 "subsystems": [ 00:30:56.071 { 00:30:56.071 "subsystem": "bdev", 00:30:56.071 "config": [ 00:30:56.071 { 00:30:56.071 "params": { 00:30:56.071 "io_mechanism": "libaio", 00:30:56.071 "conserve_cpu": false, 00:30:56.071 "filename": "/dev/nvme0n1", 00:30:56.071 "name": "xnvme_bdev" 00:30:56.071 }, 00:30:56.071 "method": "bdev_xnvme_create" 00:30:56.071 }, 00:30:56.071 { 00:30:56.071 "method": "bdev_wait_for_examine" 00:30:56.071 } 00:30:56.071 ] 00:30:56.071 } 00:30:56.071 ] 00:30:56.071 } 00:30:56.071 [2024-12-06 06:57:28.197985] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:30:56.071 [2024-12-06 06:57:28.198132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70473 ] 00:30:56.071 [2024-12-06 06:57:28.379041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.071 [2024-12-06 06:57:28.509008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.329 Running I/O for 5 seconds... 00:30:58.639 24528.00 IOPS, 95.81 MiB/s [2024-12-06T06:57:32.167Z] 25928.00 IOPS, 101.28 MiB/s [2024-12-06T06:57:33.102Z] 25182.33 IOPS, 98.37 MiB/s [2024-12-06T06:57:34.035Z] 24677.00 IOPS, 96.39 MiB/s 00:31:01.445 Latency(us) 00:31:01.445 [2024-12-06T06:57:34.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.445 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:31:01.445 xnvme_bdev : 5.01 24347.59 95.11 0.00 0.00 2621.62 301.61 7149.38 00:31:01.445 [2024-12-06T06:57:34.036Z] =================================================================================================================== 00:31:01.445 [2024-12-06T06:57:34.036Z] Total : 24347.59 95.11 0.00 0.00 2621.62 301.61 7149.38 00:31:02.820 06:57:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:02.820 06:57:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:31:02.820 06:57:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:31:02.820 06:57:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:31:02.820 06:57:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.820 { 00:31:02.820 "subsystems": [ 00:31:02.820 { 00:31:02.821 "subsystem": "bdev", 00:31:02.821 "config": [ 00:31:02.821 { 00:31:02.821 "params": { 00:31:02.821 "io_mechanism": "libaio", 00:31:02.821 "conserve_cpu": false, 00:31:02.821 "filename": "/dev/nvme0n1", 00:31:02.821 "name": "xnvme_bdev" 00:31:02.821 }, 00:31:02.821 "method": "bdev_xnvme_create" 00:31:02.821 }, 00:31:02.821 { 00:31:02.821 "method": "bdev_wait_for_examine" 00:31:02.821 } 00:31:02.821 ] 00:31:02.821 } 00:31:02.821 ] 00:31:02.821 } 00:31:02.821 [2024-12-06 06:57:35.089979] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
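
Each bdevperf pass receives its bdev configuration as JSON on fd 62 and is otherwise driven entirely by flags. A standalone sketch of the randwrite invocation now starting, run from the SPDK repo root with the same flags but the config in a regular file (the file path is a placeholder; the JSON matches the block printed above):

    cat > /tmp/xnvme.json <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"io_mechanism":"libaio","conserve_cpu":false,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    EOF
    build/examples/bdevperf --json /tmp/xnvme.json \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096   # 64-deep 4 KiB randwrite for 5 s

The -T flag restricts the run to the named bdev, which is why only xnvme_bdev appears in the latency summaries.
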
00:31:02.821 [2024-12-06 06:57:35.090156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70551 ] 00:31:02.821 [2024-12-06 06:57:35.273216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.821 [2024-12-06 06:57:35.378652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.385 Running I/O for 5 seconds... 00:31:05.284 23101.00 IOPS, 90.24 MiB/s [2024-12-06T06:57:38.810Z] 23053.50 IOPS, 90.05 MiB/s [2024-12-06T06:57:40.184Z] 23490.33 IOPS, 91.76 MiB/s [2024-12-06T06:57:41.119Z] 23818.00 IOPS, 93.04 MiB/s [2024-12-06T06:57:41.119Z] 23791.20 IOPS, 92.93 MiB/s 00:31:08.528 Latency(us) 00:31:08.528 [2024-12-06T06:57:41.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.528 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:31:08.528 xnvme_bdev : 5.01 23770.72 92.85 0.00 0.00 2685.08 366.78 6851.49 00:31:08.528 [2024-12-06T06:57:41.119Z] =================================================================================================================== 00:31:08.528 [2024-12-06T06:57:41.119Z] Total : 23770.72 92.85 0.00 0.00 2685.08 366.78 6851.49 00:31:09.464 ************************************ 00:31:09.464 END TEST xnvme_bdevperf 00:31:09.464 ************************************ 00:31:09.464 00:31:09.464 real 0m13.749s 00:31:09.464 user 0m5.456s 00:31:09.464 sys 0m5.794s 00:31:09.464 06:57:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.464 06:57:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.464 06:57:41 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:31:09.464 06:57:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:09.464 06:57:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.464 06:57:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:09.464 ************************************ 00:31:09.464 START TEST xnvme_fio_plugin 00:31:09.464 ************************************ 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:09.464 06:57:41 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:09.464 06:57:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:09.464 { 00:31:09.464 "subsystems": [ 00:31:09.464 { 00:31:09.464 "subsystem": "bdev", 00:31:09.464 "config": [ 00:31:09.464 { 00:31:09.464 "params": { 00:31:09.464 "io_mechanism": "libaio", 00:31:09.464 "conserve_cpu": false, 00:31:09.464 "filename": "/dev/nvme0n1", 00:31:09.464 "name": "xnvme_bdev" 00:31:09.464 }, 00:31:09.464 "method": "bdev_xnvme_create" 00:31:09.464 }, 00:31:09.464 { 00:31:09.464 "method": "bdev_wait_for_examine" 00:31:09.464 } 00:31:09.464 ] 00:31:09.464 } 00:31:09.464 ] 00:31:09.464 } 00:31:09.723 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:31:09.723 fio-3.35 00:31:09.723 Starting 1 thread 00:31:16.291 00:31:16.291 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70675: Fri Dec 6 06:57:47 2024 00:31:16.291 read: IOPS=24.0k, BW=93.9MiB/s (98.5MB/s)(470MiB/5001msec) 00:31:16.291 slat (usec): min=5, max=831, avg=37.15, stdev=29.85 00:31:16.291 clat (usec): min=81, max=5664, avg=1465.27, stdev=800.09 00:31:16.291 lat (usec): min=116, max=5790, avg=1502.42, stdev=802.44 00:31:16.291 clat percentiles (usec): 00:31:16.291 | 1.00th=[ 249], 5.00th=[ 371], 10.00th=[ 494], 20.00th=[ 725], 00:31:16.291 | 30.00th=[ 930], 40.00th=[ 1139], 50.00th=[ 1352], 60.00th=[ 1598], 00:31:16.291 | 70.00th=[ 1860], 80.00th=[ 2180], 90.00th=[ 2573], 95.00th=[ 2868], 00:31:16.291 | 99.00th=[ 3687], 99.50th=[ 4015], 99.90th=[ 4555], 99.95th=[ 4752], 00:31:16.291 | 99.99th=[ 5080] 00:31:16.291 bw ( KiB/s): min=89984, max=107760, 
per=100.00%, avg=97178.67, stdev=6998.61, samples=9 00:31:16.291 iops : min=22496, max=26940, avg=24294.67, stdev=1749.65, samples=9 00:31:16.291 lat (usec) : 100=0.01%, 250=1.05%, 500=9.15%, 750=11.09%, 1000=12.07% 00:31:16.291 lat (msec) : 2=41.28%, 4=24.85%, 10=0.50% 00:31:16.291 cpu : usr=24.10%, sys=53.88%, ctx=72, majf=0, minf=764 00:31:16.291 IO depths : 1=0.1%, 2=1.7%, 4=5.3%, 8=11.9%, 16=25.5%, 32=53.8%, >=64=1.7% 00:31:16.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.291 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:31:16.291 issued rwts: total=120226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.291 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:16.291 00:31:16.291 Run status group 0 (all jobs): 00:31:16.291 READ: bw=93.9MiB/s (98.5MB/s), 93.9MiB/s-93.9MiB/s (98.5MB/s-98.5MB/s), io=470MiB (492MB), run=5001-5001msec 00:31:16.865 ----------------------------------------------------- 00:31:16.865 Suppressions used: 00:31:16.865 count bytes template 00:31:16.865 1 11 /usr/src/fio/parse.c 00:31:16.865 1 8 libtcmalloc_minimal.so 00:31:16.865 1 904 libcrypto.so 00:31:16.865 ----------------------------------------------------- 00:31:16.865 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:16.865 
06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:16.865 06:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:16.865 { 00:31:16.865 "subsystems": [ 00:31:16.865 { 00:31:16.865 "subsystem": "bdev", 00:31:16.865 "config": [ 00:31:16.865 { 00:31:16.865 "params": { 00:31:16.865 "io_mechanism": "libaio", 00:31:16.865 "conserve_cpu": false, 00:31:16.865 "filename": "/dev/nvme0n1", 00:31:16.865 "name": "xnvme_bdev" 00:31:16.865 }, 00:31:16.865 "method": "bdev_xnvme_create" 00:31:16.865 }, 00:31:16.865 { 00:31:16.865 "method": "bdev_wait_for_examine" 00:31:16.865 } 00:31:16.865 ] 00:31:16.865 } 00:31:16.865 ] 00:31:16.865 } 00:31:17.123 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:31:17.123 fio-3.35 00:31:17.123 Starting 1 thread 00:31:23.780 00:31:23.780 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70772: Fri Dec 6 06:57:55 2024 00:31:23.780 write: IOPS=25.7k, BW=100MiB/s (105MB/s)(501MiB/5001msec); 0 zone resets 00:31:23.780 slat (usec): min=5, max=1159, avg=34.79, stdev=27.87 00:31:23.780 clat (usec): min=74, max=5529, avg=1376.14, stdev=751.99 00:31:23.780 lat (usec): min=105, max=5590, avg=1410.93, stdev=754.32 00:31:23.780 clat percentiles (usec): 00:31:23.781 | 1.00th=[ 237], 5.00th=[ 351], 10.00th=[ 461], 20.00th=[ 676], 00:31:23.781 | 30.00th=[ 873], 40.00th=[ 1074], 50.00th=[ 1270], 60.00th=[ 1500], 00:31:23.781 | 70.00th=[ 1745], 80.00th=[ 2040], 90.00th=[ 2409], 95.00th=[ 2671], 00:31:23.781 | 99.00th=[ 3425], 99.50th=[ 3785], 99.90th=[ 4359], 99.95th=[ 4621], 00:31:23.781 | 99.99th=[ 4948] 00:31:23.781 bw ( KiB/s): min=92208, max=117176, per=100.00%, avg=102866.67, stdev=8364.12, samples=9 00:31:23.781 iops : min=23052, max=29294, avg=25716.67, stdev=2091.03, samples=9 00:31:23.781 lat (usec) : 100=0.01%, 250=1.33%, 500=10.31%, 750=12.27%, 1000=12.47% 00:31:23.781 lat (msec) : 2=42.50%, 4=20.82%, 10=0.31% 00:31:23.781 cpu : usr=24.90%, sys=53.92%, ctx=104, majf=0, minf=765 00:31:23.781 IO depths : 1=0.1%, 2=1.6%, 4=5.3%, 8=12.1%, 16=25.8%, 32=53.4%, >=64=1.7% 00:31:23.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.781 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:31:23.781 issued rwts: total=0,128311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.781 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:23.781 00:31:23.781 Run status group 0 (all jobs): 00:31:23.781 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=501MiB (526MB), run=5001-5001msec 00:31:24.040 ----------------------------------------------------- 00:31:24.040 Suppressions used: 00:31:24.040 count bytes template 00:31:24.041 1 11 /usr/src/fio/parse.c 00:31:24.041 1 8 libtcmalloc_minimal.so 00:31:24.041 1 904 libcrypto.so 00:31:24.041 ----------------------------------------------------- 00:31:24.041 00:31:24.041 ************************************ 00:31:24.041 END TEST xnvme_fio_plugin 
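
The fio plugin test drives fio through SPDK's bdev engine rather than a raw device node: the engine library is LD_PRELOADed (together with libasan on this sanitizer build) and the bdev layer is configured through --spdk_json_conf. A trimmed sketch of the invocation used above, reusing the config file from the earlier bdevperf sketch:

    LD_PRELOAD='/usr/lib64/libasan.so.8 build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Note that --filename names the bdev defined in the JSON config, not a device path; fio never opens /dev/nvme0n1 itself.
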
00:31:24.041 ************************************ 00:31:24.041 00:31:24.041 real 0m14.651s 00:31:24.041 user 0m6.165s 00:31:24.041 sys 0m5.983s 00:31:24.041 06:57:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:24.041 06:57:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:24.041 06:57:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:31:24.041 06:57:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:31:24.041 06:57:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:31:24.041 06:57:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:31:24.041 06:57:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:24.041 06:57:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.041 06:57:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:24.041 ************************************ 00:31:24.041 START TEST xnvme_rpc 00:31:24.041 ************************************ 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70853 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70853 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70853 ']' 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.041 06:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:24.311 [2024-12-06 06:57:56.701030] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:31:24.311 [2024-12-06 06:57:56.701171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70853 ] 00:31:24.311 [2024-12-06 06:57:56.879015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.574 [2024-12-06 06:57:56.981810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:25.505 xnvme_bdev 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70853 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70853 ']' 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70853 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70853 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:25.505 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:25.506 killing process with pid 70853 00:31:25.506 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70853' 00:31:25.506 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70853 00:31:25.506 06:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70853 00:31:28.037 ************************************ 00:31:28.037 END TEST xnvme_rpc 00:31:28.037 ************************************ 00:31:28.037 00:31:28.037 real 0m3.512s 00:31:28.037 user 0m3.820s 00:31:28.037 sys 0m0.406s 00:31:28.037 06:58:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:28.037 06:58:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:28.037 06:58:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:31:28.037 06:58:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:28.037 06:58:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.037 06:58:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:28.037 ************************************ 00:31:28.037 START TEST xnvme_bdevperf 00:31:28.037 ************************************ 00:31:28.037 06:58:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:31:28.037 06:58:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:31:28.037 06:58:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:31:28.037 06:58:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:28.037 06:58:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:31:28.037 06:58:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:31:28.037 06:58:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:31:28.037 06:58:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:28.037 { 00:31:28.038 "subsystems": [ 00:31:28.038 { 00:31:28.038 "subsystem": "bdev", 00:31:28.038 "config": [ 00:31:28.038 { 00:31:28.038 "params": { 00:31:28.038 "io_mechanism": "libaio", 00:31:28.038 "conserve_cpu": true, 00:31:28.038 "filename": "/dev/nvme0n1", 00:31:28.038 "name": "xnvme_bdev" 00:31:28.038 }, 00:31:28.038 "method": "bdev_xnvme_create" 00:31:28.038 }, 00:31:28.038 { 00:31:28.038 "method": "bdev_wait_for_examine" 00:31:28.038 } 00:31:28.038 ] 00:31:28.038 } 00:31:28.038 ] 00:31:28.038 } 00:31:28.038 [2024-12-06 06:58:00.244591] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:31:28.038 [2024-12-06 06:58:00.244747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70937 ] 00:31:28.038 [2024-12-06 06:58:00.414386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.038 [2024-12-06 06:58:00.516490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.296 Running I/O for 5 seconds... 00:31:30.623 27543.00 IOPS, 107.59 MiB/s [2024-12-06T06:58:04.152Z] 27777.50 IOPS, 108.51 MiB/s [2024-12-06T06:58:05.093Z] 26807.67 IOPS, 104.72 MiB/s [2024-12-06T06:58:06.056Z] 26345.75 IOPS, 102.91 MiB/s [2024-12-06T06:58:06.056Z] 26164.60 IOPS, 102.21 MiB/s 00:31:33.465 Latency(us) 00:31:33.465 [2024-12-06T06:58:06.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:33.465 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:31:33.465 xnvme_bdev : 5.01 26149.03 102.14 0.00 0.00 2441.54 577.16 5659.93 00:31:33.465 [2024-12-06T06:58:06.056Z] =================================================================================================================== 00:31:33.465 [2024-12-06T06:58:06.056Z] Total : 26149.03 102.14 0.00 0.00 2441.54 577.16 5659.93 00:31:34.402 06:58:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:34.402 06:58:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:31:34.402 06:58:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:31:34.402 06:58:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:31:34.402 06:58:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:34.402 { 00:31:34.402 "subsystems": [ 00:31:34.402 { 00:31:34.402 "subsystem": "bdev", 00:31:34.402 "config": [ 00:31:34.402 { 00:31:34.402 "params": { 00:31:34.402 "io_mechanism": "libaio", 00:31:34.402 "conserve_cpu": true, 00:31:34.402 "filename": "/dev/nvme0n1", 00:31:34.402 "name": "xnvme_bdev" 00:31:34.402 }, 00:31:34.402 "method": "bdev_xnvme_create" 00:31:34.402 }, 00:31:34.402 { 00:31:34.402 "method": "bdev_wait_for_examine" 00:31:34.402 } 00:31:34.402 ] 00:31:34.402 } 00:31:34.402 ] 00:31:34.402 } 00:31:34.661 [2024-12-06 06:58:06.997571] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:31:34.661 [2024-12-06 06:58:06.997766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71008 ] 00:31:34.661 [2024-12-06 06:58:07.180800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.919 [2024-12-06 06:58:07.279089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.178 Running I/O for 5 seconds... 00:31:37.046 24776.00 IOPS, 96.78 MiB/s [2024-12-06T06:58:11.036Z] 26215.00 IOPS, 102.40 MiB/s [2024-12-06T06:58:11.968Z] 25630.00 IOPS, 100.12 MiB/s [2024-12-06T06:58:12.901Z] 25052.25 IOPS, 97.86 MiB/s 00:31:40.310 Latency(us) 00:31:40.310 [2024-12-06T06:58:12.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.310 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:31:40.310 xnvme_bdev : 5.01 24698.91 96.48 0.00 0.00 2584.42 269.96 5838.66 00:31:40.310 [2024-12-06T06:58:12.901Z] =================================================================================================================== 00:31:40.310 [2024-12-06T06:58:12.901Z] Total : 24698.91 96.48 0.00 0.00 2584.42 269.96 5838.66 00:31:41.248 00:31:41.248 real 0m13.511s 00:31:41.248 user 0m5.258s 00:31:41.248 sys 0m5.739s 00:31:41.248 06:58:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:41.248 06:58:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.248 ************************************ 00:31:41.248 END TEST xnvme_bdevperf 00:31:41.248 ************************************ 00:31:41.248 06:58:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:31:41.248 06:58:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:41.248 06:58:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:41.248 06:58:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:41.248 ************************************ 00:31:41.248 START TEST xnvme_fio_plugin 00:31:41.248 ************************************ 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:41.248 06:58:13 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:41.248 06:58:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:41.248 { 00:31:41.248 "subsystems": [ 00:31:41.248 { 00:31:41.248 "subsystem": "bdev", 00:31:41.248 "config": [ 00:31:41.248 { 00:31:41.248 "params": { 00:31:41.248 "io_mechanism": "libaio", 00:31:41.248 "conserve_cpu": true, 00:31:41.248 "filename": "/dev/nvme0n1", 00:31:41.248 "name": "xnvme_bdev" 00:31:41.248 }, 00:31:41.248 "method": "bdev_xnvme_create" 00:31:41.248 }, 00:31:41.248 { 00:31:41.248 "method": "bdev_wait_for_examine" 00:31:41.248 } 00:31:41.248 ] 00:31:41.248 } 00:31:41.248 ] 00:31:41.248 } 00:31:41.507 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:31:41.507 fio-3.35 00:31:41.507 Starting 1 thread 00:31:48.069 00:31:48.069 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71133: Fri Dec 6 06:58:19 2024 00:31:48.069 read: IOPS=23.0k, BW=89.8MiB/s (94.2MB/s)(449MiB/5001msec) 00:31:48.069 slat (usec): min=5, max=2019, avg=39.08, stdev=27.35 00:31:48.069 clat (usec): min=109, max=6624, avg=1510.77, stdev=816.20 00:31:48.069 lat (usec): min=180, max=6679, avg=1549.85, stdev=818.24 00:31:48.069 clat percentiles (usec): 00:31:48.069 | 1.00th=[ 239], 5.00th=[ 359], 10.00th=[ 486], 20.00th=[ 734], 00:31:48.069 | 30.00th=[ 963], 40.00th=[ 1188], 50.00th=[ 1418], 60.00th=[ 1680], 00:31:48.069 | 70.00th=[ 1958], 80.00th=[ 2278], 90.00th=[ 2606], 95.00th=[ 2900], 00:31:48.069 | 99.00th=[ 3589], 99.50th=[ 3949], 99.90th=[ 4555], 99.95th=[ 4883], 00:31:48.069 | 99.99th=[ 5669] 00:31:48.069 bw ( KiB/s): min=81928, max=101208, per=100.00%, avg=92181.33, stdev=5925.22, 
samples=9 00:31:48.069 iops : min=20482, max=25302, avg=23045.33, stdev=1481.31, samples=9 00:31:48.069 lat (usec) : 250=1.36%, 500=9.19%, 750=10.32%, 1000=10.92% 00:31:48.069 lat (msec) : 2=39.58%, 4=28.20%, 10=0.43% 00:31:48.069 cpu : usr=23.56%, sys=53.90%, ctx=158, majf=0, minf=636 00:31:48.069 IO depths : 1=0.1%, 2=1.9%, 4=5.7%, 8=12.2%, 16=25.5%, 32=52.9%, >=64=1.7% 00:31:48.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.069 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:31:48.069 issued rwts: total=114989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.069 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:48.069 00:31:48.069 Run status group 0 (all jobs): 00:31:48.070 READ: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s (94.2MB/s-94.2MB/s), io=449MiB (471MB), run=5001-5001msec 00:31:48.636 ----------------------------------------------------- 00:31:48.636 Suppressions used: 00:31:48.636 count bytes template 00:31:48.636 1 11 /usr/src/fio/parse.c 00:31:48.636 1 8 libtcmalloc_minimal.so 00:31:48.636 1 904 libcrypto.so 00:31:48.636 ----------------------------------------------------- 00:31:48.636 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:48.636 06:58:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:48.636 { 00:31:48.636 "subsystems": [ 00:31:48.636 { 00:31:48.636 "subsystem": "bdev", 00:31:48.636 "config": [ 00:31:48.636 { 00:31:48.636 "params": { 00:31:48.636 "io_mechanism": "libaio", 00:31:48.636 "conserve_cpu": true, 00:31:48.636 "filename": "/dev/nvme0n1", 00:31:48.636 "name": "xnvme_bdev" 00:31:48.636 }, 00:31:48.636 "method": "bdev_xnvme_create" 00:31:48.636 }, 00:31:48.636 { 00:31:48.636 "method": "bdev_wait_for_examine" 00:31:48.636 } 00:31:48.636 ] 00:31:48.636 } 00:31:48.636 ] 00:31:48.636 } 00:31:48.895 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:31:48.895 fio-3.35 00:31:48.895 Starting 1 thread 00:31:55.456 00:31:55.456 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71230: Fri Dec 6 06:58:27 2024 00:31:55.456 write: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(454MiB/5001msec); 0 zone resets 00:31:55.456 slat (usec): min=5, max=648, avg=38.64, stdev=27.42 00:31:55.456 clat (usec): min=122, max=5116, avg=1503.74, stdev=798.99 00:31:55.456 lat (usec): min=184, max=5221, avg=1542.38, stdev=800.59 00:31:55.456 clat percentiles (usec): 00:31:55.456 | 1.00th=[ 247], 5.00th=[ 367], 10.00th=[ 494], 20.00th=[ 725], 00:31:55.456 | 30.00th=[ 955], 40.00th=[ 1188], 50.00th=[ 1418], 60.00th=[ 1680], 00:31:55.456 | 70.00th=[ 1958], 80.00th=[ 2245], 90.00th=[ 2606], 95.00th=[ 2835], 00:31:55.456 | 99.00th=[ 3458], 99.50th=[ 3785], 99.90th=[ 4359], 99.95th=[ 4555], 00:31:55.456 | 99.99th=[ 4817] 00:31:55.456 bw ( KiB/s): min=85229, max=102000, per=100.00%, avg=93052.11, stdev=4658.14, samples=9 00:31:55.456 iops : min=21307, max=25500, avg=23263.00, stdev=1164.59, samples=9 00:31:55.456 lat (usec) : 250=1.06%, 500=9.18%, 750=10.72%, 1000=11.05% 00:31:55.456 lat (msec) : 2=39.29%, 4=28.42%, 10=0.29% 00:31:55.456 cpu : usr=24.52%, sys=53.02%, ctx=150, majf=0, minf=649 00:31:55.456 IO depths : 1=0.1%, 2=1.8%, 4=5.7%, 8=12.3%, 16=25.7%, 32=52.8%, >=64=1.7% 00:31:55.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.456 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:31:55.456 issued rwts: total=0,116118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.456 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:55.456 00:31:55.456 Run status group 0 (all jobs): 00:31:55.456 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=454MiB (476MB), run=5001-5001msec 00:31:56.023 ----------------------------------------------------- 00:31:56.023 Suppressions used: 00:31:56.023 count bytes template 00:31:56.023 1 11 /usr/src/fio/parse.c 00:31:56.023 1 8 libtcmalloc_minimal.so 00:31:56.023 1 904 libcrypto.so 00:31:56.023 ----------------------------------------------------- 00:31:56.023 00:31:56.023 ************************************ 00:31:56.023 END TEST xnvme_fio_plugin 00:31:56.023 ************************************ 
00:31:56.023 00:31:56.023 real 0m14.667s 00:31:56.023 user 0m6.093s 00:31:56.023 sys 0m5.965s 00:31:56.023 06:58:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:56.023 06:58:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:31:56.023 06:58:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:31:56.023 06:58:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:56.023 06:58:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:56.023 06:58:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:56.023 ************************************ 00:31:56.023 START TEST xnvme_rpc 00:31:56.023 ************************************ 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:31:56.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71312 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71312 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71312 ']' 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.023 06:58:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:56.023 [2024-12-06 06:58:28.539389] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:31:56.023 [2024-12-06 06:58:28.539530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71312 ] 00:31:56.303 [2024-12-06 06:58:28.711444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.303 [2024-12-06 06:58:28.817118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:57.248 xnvme_bdev 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71312 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71312 ']' 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71312 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.248 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71312 00:31:57.505 killing process with pid 71312 00:31:57.505 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:57.505 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:57.505 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71312' 00:31:57.505 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71312 00:31:57.505 06:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71312 00:31:59.403 00:31:59.403 real 0m3.498s 00:31:59.403 user 0m3.829s 00:31:59.403 sys 0m0.442s 00:31:59.403 06:58:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.403 06:58:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:59.403 ************************************ 00:31:59.403 END TEST xnvme_rpc 00:31:59.403 ************************************ 00:31:59.403 06:58:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:31:59.403 06:58:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:59.403 06:58:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.403 06:58:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:59.403 ************************************ 00:31:59.403 START TEST xnvme_bdevperf 00:31:59.403 ************************************ 00:31:59.403 06:58:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:31:59.403 06:58:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:31:59.403 06:58:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:31:59.403 06:58:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:59.403 06:58:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:31:59.403 06:58:31 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:31:59.403 06:58:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:31:59.403 06:58:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:59.661 { 00:31:59.661 "subsystems": [ 00:31:59.661 { 00:31:59.661 "subsystem": "bdev", 00:31:59.661 "config": [ 00:31:59.661 { 00:31:59.661 "params": { 00:31:59.661 "io_mechanism": "io_uring", 00:31:59.661 "conserve_cpu": false, 00:31:59.661 "filename": "/dev/nvme0n1", 00:31:59.661 "name": "xnvme_bdev" 00:31:59.661 }, 00:31:59.661 "method": "bdev_xnvme_create" 00:31:59.661 }, 00:31:59.661 { 00:31:59.661 "method": "bdev_wait_for_examine" 00:31:59.661 } 00:31:59.661 ] 00:31:59.661 } 00:31:59.661 ] 00:31:59.661 } 00:31:59.661 [2024-12-06 06:58:32.070104] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:31:59.661 [2024-12-06 06:58:32.070248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71392 ] 00:31:59.661 [2024-12-06 06:58:32.240572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.919 [2024-12-06 06:58:32.343250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.176 Running I/O for 5 seconds... 00:32:02.484 50169.00 IOPS, 195.97 MiB/s [2024-12-06T06:58:36.012Z] 49852.50 IOPS, 194.74 MiB/s [2024-12-06T06:58:36.947Z] 49000.33 IOPS, 191.41 MiB/s [2024-12-06T06:58:37.882Z] 48974.25 IOPS, 191.31 MiB/s [2024-12-06T06:58:37.882Z] 48059.20 IOPS, 187.73 MiB/s 00:32:05.291 Latency(us) 00:32:05.291 [2024-12-06T06:58:37.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.291 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:32:05.291 xnvme_bdev : 5.00 48039.85 187.66 0.00 0.00 1327.92 312.79 6404.65 00:32:05.291 [2024-12-06T06:58:37.882Z] =================================================================================================================== 00:32:05.291 [2024-12-06T06:58:37.882Z] Total : 48039.85 187.66 0.00 0.00 1327.92 312.79 6404.65 00:32:06.227 06:58:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:06.227 06:58:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:32:06.227 06:58:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:32:06.227 06:58:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:06.227 06:58:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.227 { 00:32:06.227 "subsystems": [ 00:32:06.227 { 00:32:06.227 "subsystem": "bdev", 00:32:06.227 "config": [ 00:32:06.227 { 00:32:06.227 "params": { 00:32:06.227 "io_mechanism": "io_uring", 00:32:06.227 "conserve_cpu": false, 00:32:06.227 "filename": "/dev/nvme0n1", 00:32:06.227 "name": "xnvme_bdev" 00:32:06.227 }, 00:32:06.227 "method": "bdev_xnvme_create" 00:32:06.227 }, 00:32:06.227 { 00:32:06.227 "method": "bdev_wait_for_examine" 00:32:06.227 } 00:32:06.227 ] 00:32:06.227 } 00:32:06.227 ] 00:32:06.227 } 00:32:06.227 [2024-12-06 06:58:38.804919] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:32:06.227 [2024-12-06 06:58:38.805265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71468 ] 00:32:06.486 [2024-12-06 06:58:38.983675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.745 [2024-12-06 06:58:39.092010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.002 Running I/O for 5 seconds... 00:32:08.875 49536.00 IOPS, 193.50 MiB/s [2024-12-06T06:58:42.839Z] 47970.50 IOPS, 187.38 MiB/s [2024-12-06T06:58:43.771Z] 41345.67 IOPS, 161.51 MiB/s [2024-12-06T06:58:44.704Z] 42273.25 IOPS, 165.13 MiB/s [2024-12-06T06:58:44.704Z] 43597.80 IOPS, 170.30 MiB/s 00:32:12.113 Latency(us) 00:32:12.113 [2024-12-06T06:58:44.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.113 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:32:12.114 xnvme_bdev : 5.00 43583.85 170.25 0.00 0.00 1462.67 729.83 10068.71 00:32:12.114 [2024-12-06T06:58:44.705Z] =================================================================================================================== 00:32:12.114 [2024-12-06T06:58:44.705Z] Total : 43583.85 170.25 0.00 0.00 1462.67 729.83 10068.71 00:32:13.069 00:32:13.070 real 0m13.470s 00:32:13.070 user 0m7.290s 00:32:13.070 sys 0m5.982s 00:32:13.070 06:58:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.070 ************************************ 00:32:13.070 END TEST xnvme_bdevperf 00:32:13.070 ************************************ 00:32:13.070 06:58:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:13.070 06:58:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:32:13.070 06:58:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:13.070 06:58:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.070 06:58:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:13.070 ************************************ 00:32:13.070 START TEST xnvme_fio_plugin 00:32:13.070 ************************************ 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:13.070 
06:58:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:13.070 06:58:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:13.070 { 00:32:13.070 "subsystems": [ 00:32:13.070 { 00:32:13.070 "subsystem": "bdev", 00:32:13.070 "config": [ 00:32:13.070 { 00:32:13.070 "params": { 00:32:13.070 "io_mechanism": "io_uring", 00:32:13.070 "conserve_cpu": false, 00:32:13.070 "filename": "/dev/nvme0n1", 00:32:13.070 "name": "xnvme_bdev" 00:32:13.070 }, 00:32:13.070 "method": "bdev_xnvme_create" 00:32:13.070 }, 00:32:13.070 { 00:32:13.070 "method": "bdev_wait_for_examine" 00:32:13.070 } 00:32:13.070 ] 00:32:13.070 } 00:32:13.070 ] 00:32:13.070 } 00:32:13.327 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:32:13.327 fio-3.35 00:32:13.327 Starting 1 thread 00:32:19.887 00:32:19.887 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71596: Fri Dec 6 06:58:51 2024 00:32:19.887 read: IOPS=51.2k, BW=200MiB/s (210MB/s)(1001MiB/5001msec) 00:32:19.887 slat (usec): min=3, max=104, avg= 4.26, stdev= 2.65 00:32:19.887 clat (usec): min=326, max=8121, avg=1080.74, stdev=248.38 00:32:19.887 lat (usec): min=329, max=8124, avg=1085.00, stdev=250.02 00:32:19.887 clat percentiles (usec): 00:32:19.887 | 1.00th=[ 807], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 922], 00:32:19.887 | 30.00th=[ 955], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1057], 00:32:19.887 | 70.00th=[ 1106], 80.00th=[ 1172], 90.00th=[ 1319], 95.00th=[ 1565], 00:32:19.887 | 99.00th=[ 2114], 99.50th=[ 2245], 99.90th=[ 2507], 99.95th=[ 2638], 00:32:19.887 | 99.99th=[ 5014] 00:32:19.887 bw ( KiB/s): min=193760, 
max=226816, per=100.00%, avg=205905.78, stdev=10386.10, samples=9 00:32:19.888 iops : min=48440, max=56704, avg=51476.44, stdev=2596.52, samples=9 00:32:19.888 lat (usec) : 500=0.02%, 750=0.12%, 1000=42.45% 00:32:19.888 lat (msec) : 2=55.94%, 4=1.44%, 10=0.02% 00:32:19.888 cpu : usr=41.22%, sys=57.78%, ctx=23, majf=0, minf=762 00:32:19.888 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:32:19.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.888 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:32:19.888 issued rwts: total=256199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:19.888 00:32:19.888 Run status group 0 (all jobs): 00:32:19.888 READ: bw=200MiB/s (210MB/s), 200MiB/s-200MiB/s (210MB/s-210MB/s), io=1001MiB (1049MB), run=5001-5001msec 00:32:20.455 ----------------------------------------------------- 00:32:20.455 Suppressions used: 00:32:20.455 count bytes template 00:32:20.455 1 11 /usr/src/fio/parse.c 00:32:20.455 1 8 libtcmalloc_minimal.so 00:32:20.455 1 904 libcrypto.so 00:32:20.455 ----------------------------------------------------- 00:32:20.455 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:20.455 06:58:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:20.455 06:58:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:20.455 06:58:53 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:20.455 06:58:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:32:20.455 06:58:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:20.455 06:58:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:20.714 { 00:32:20.714 "subsystems": [ 00:32:20.715 { 00:32:20.715 "subsystem": "bdev", 00:32:20.715 "config": [ 00:32:20.715 { 00:32:20.715 "params": { 00:32:20.715 "io_mechanism": "io_uring", 00:32:20.715 "conserve_cpu": false, 00:32:20.715 "filename": "/dev/nvme0n1", 00:32:20.715 "name": "xnvme_bdev" 00:32:20.715 }, 00:32:20.715 "method": "bdev_xnvme_create" 00:32:20.715 }, 00:32:20.715 { 00:32:20.715 "method": "bdev_wait_for_examine" 00:32:20.715 } 00:32:20.715 ] 00:32:20.715 } 00:32:20.715 ] 00:32:20.715 } 00:32:20.715 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:32:20.715 fio-3.35 00:32:20.715 Starting 1 thread 00:32:27.285 00:32:27.285 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71688: Fri Dec 6 06:58:58 2024 00:32:27.285 write: IOPS=48.6k, BW=190MiB/s (199MB/s)(949MiB/5001msec); 0 zone resets 00:32:27.285 slat (usec): min=3, max=121, avg= 4.31, stdev= 1.65 00:32:27.285 clat (usec): min=783, max=2409, avg=1146.71, stdev=160.29 00:32:27.285 lat (usec): min=786, max=2419, avg=1151.03, stdev=160.87 00:32:27.285 clat percentiles (usec): 00:32:27.285 | 1.00th=[ 873], 5.00th=[ 930], 10.00th=[ 971], 20.00th=[ 1020], 00:32:27.285 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1172], 00:32:27.285 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[ 1319], 95.00th=[ 1434], 00:32:27.285 | 99.00th=[ 1696], 99.50th=[ 1778], 99.90th=[ 2024], 99.95th=[ 2114], 00:32:27.285 | 99.99th=[ 2311] 00:32:27.285 bw ( KiB/s): min=177664, max=207360, per=100.00%, avg=195356.44, stdev=10345.41, samples=9 00:32:27.285 iops : min=44416, max=51840, avg=48839.11, stdev=2586.35, samples=9 00:32:27.285 lat (usec) : 1000=15.47% 00:32:27.285 lat (msec) : 2=84.42%, 4=0.11% 00:32:27.285 cpu : usr=40.82%, sys=58.22%, ctx=9, majf=0, minf=763 00:32:27.285 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:32:27.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.285 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:32:27.285 issued rwts: total=0,243008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:27.285 00:32:27.285 Run status group 0 (all jobs): 00:32:27.285 WRITE: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=949MiB (995MB), run=5001-5001msec 00:32:27.851 ----------------------------------------------------- 00:32:27.851 Suppressions used: 00:32:27.851 count bytes template 00:32:27.851 1 11 /usr/src/fio/parse.c 00:32:27.851 1 8 libtcmalloc_minimal.so 00:32:27.851 1 904 libcrypto.so 00:32:27.851 ----------------------------------------------------- 00:32:27.851 00:32:27.851 ************************************ 00:32:27.851 END TEST xnvme_fio_plugin 00:32:27.851 ************************************ 00:32:27.851 
00:32:27.851 real 0m14.812s 00:32:27.851 user 0m7.982s 00:32:27.851 sys 0m6.429s 00:32:27.851 06:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.851 06:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:27.851 06:59:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:32:27.851 06:59:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:32:27.851 06:59:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:32:27.851 06:59:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:32:27.851 06:59:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.851 06:59:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.851 06:59:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:27.851 ************************************ 00:32:27.851 START TEST xnvme_rpc 00:32:27.851 ************************************ 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71780 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:27.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71780 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71780 ']' 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.851 06:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:28.109 [2024-12-06 06:59:00.477924] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:32:28.109 [2024-12-06 06:59:00.478102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71780 ] 00:32:28.109 [2024-12-06 06:59:00.664800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.368 [2024-12-06 06:59:00.788132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.304 xnvme_bdev 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71780 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71780 ']' 00:32:29.304 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71780 00:32:29.305 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:32:29.305 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.305 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71780 00:32:29.305 killing process with pid 71780 00:32:29.305 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.305 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.305 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71780' 00:32:29.305 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71780 00:32:29.305 06:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71780 00:32:31.845 00:32:31.845 real 0m3.506s 00:32:31.845 user 0m3.868s 00:32:31.845 sys 0m0.430s 00:32:31.845 06:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.845 06:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:31.845 ************************************ 00:32:31.845 END TEST xnvme_rpc 00:32:31.845 ************************************ 00:32:31.845 06:59:03 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:32:31.845 06:59:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:31.845 06:59:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.845 06:59:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:31.845 ************************************ 00:32:31.845 START TEST xnvme_bdevperf 00:32:31.845 ************************************ 00:32:31.845 06:59:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:32:31.845 06:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:32:31.845 06:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:32:31.845 06:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:31.845 06:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:32:31.845 06:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
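Stripped of the harness wrappers (rpc_cmd, waitforlisten, killprocess), the xnvme_rpc pass that finished above is a short RPC round-trip against a running spdk_tgt. A bare-commands sketch using the same binary and arguments this log shows — scripts/rpc.py is the stock SPDK RPC client, -c mirrors the conserve_cpu flag the log passes through rpc_cmd, and the backgrounding plus sleep are simplifications of the harness helpers:

# Sketch of the xnvme_rpc flow above, without the test wrappers.
cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt &                    # target listens on /var/tmp/spdk.sock
TGT=$!
sleep 1                                   # stand-in for the harness's waitforlisten
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill "$TGT"

The test's rpc_xnvme helper is exactly the framework_get_config-plus-jq pipeline above, invoked once per field it verifies (name, filename, io_mechanism, conserve_cpu).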
00:32:31.846 06:59:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:31.846 06:59:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:31.846 { 00:32:31.846 "subsystems": [ 00:32:31.846 { 00:32:31.846 "subsystem": "bdev", 00:32:31.846 "config": [ 00:32:31.846 { 00:32:31.846 "params": { 00:32:31.846 "io_mechanism": "io_uring", 00:32:31.846 "conserve_cpu": true, 00:32:31.846 "filename": "/dev/nvme0n1", 00:32:31.846 "name": "xnvme_bdev" 00:32:31.846 }, 00:32:31.846 "method": "bdev_xnvme_create" 00:32:31.846 }, 00:32:31.846 { 00:32:31.846 "method": "bdev_wait_for_examine" 00:32:31.846 } 00:32:31.846 ] 00:32:31.846 } 00:32:31.846 ] 00:32:31.846 } 00:32:31.846 [2024-12-06 06:59:04.012671] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:32:31.846 [2024-12-06 06:59:04.012878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71854 ] 00:32:31.846 [2024-12-06 06:59:04.196436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.846 [2024-12-06 06:59:04.302719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.104 Running I/O for 5 seconds... 00:32:34.414 47231.00 IOPS, 184.50 MiB/s [2024-12-06T06:59:07.937Z] 46399.50 IOPS, 181.25 MiB/s [2024-12-06T06:59:08.871Z] 46996.67 IOPS, 183.58 MiB/s [2024-12-06T06:59:09.805Z] 48159.50 IOPS, 188.12 MiB/s [2024-12-06T06:59:09.805Z] 48998.00 IOPS, 191.40 MiB/s 00:32:37.214 Latency(us) 00:32:37.214 [2024-12-06T06:59:09.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.214 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:32:37.214 xnvme_bdev : 5.00 48983.00 191.34 0.00 0.00 1302.33 834.09 4647.10 00:32:37.214 [2024-12-06T06:59:09.805Z] =================================================================================================================== 00:32:37.214 [2024-12-06T06:59:09.805Z] Total : 48983.00 191.34 0.00 0.00 1302.33 834.09 4647.10 00:32:38.150 06:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:38.150 06:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:32:38.150 06:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:32:38.150 06:59:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:38.150 06:59:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:38.150 { 00:32:38.150 "subsystems": [ 00:32:38.150 { 00:32:38.150 "subsystem": "bdev", 00:32:38.150 "config": [ 00:32:38.150 { 00:32:38.150 "params": { 00:32:38.150 "io_mechanism": "io_uring", 00:32:38.150 "conserve_cpu": true, 00:32:38.150 "filename": "/dev/nvme0n1", 00:32:38.150 "name": "xnvme_bdev" 00:32:38.150 }, 00:32:38.150 "method": "bdev_xnvme_create" 00:32:38.150 }, 00:32:38.150 { 00:32:38.150 "method": "bdev_wait_for_examine" 00:32:38.150 } 00:32:38.150 ] 00:32:38.150 } 00:32:38.150 ] 00:32:38.150 } 00:32:38.150 [2024-12-06 06:59:10.722827] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:32:38.150 [2024-12-06 06:59:10.722994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71934 ] 00:32:38.409 [2024-12-06 06:59:10.909691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.667 [2024-12-06 06:59:11.036031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.924 Running I/O for 5 seconds... 00:32:41.232 43008.00 IOPS, 168.00 MiB/s [2024-12-06T06:59:14.390Z] 44512.00 IOPS, 173.88 MiB/s [2024-12-06T06:59:15.763Z] 44992.00 IOPS, 175.75 MiB/s [2024-12-06T06:59:16.694Z] 45168.00 IOPS, 176.44 MiB/s [2024-12-06T06:59:16.694Z] 45324.80 IOPS, 177.05 MiB/s 00:32:44.103 Latency(us) 00:32:44.103 [2024-12-06T06:59:16.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.104 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:32:44.104 xnvme_bdev : 5.01 45287.71 176.91 0.00 0.00 1408.20 826.65 8400.52 00:32:44.104 [2024-12-06T06:59:16.695Z] =================================================================================================================== 00:32:44.104 [2024-12-06T06:59:16.695Z] Total : 45287.71 176.91 0.00 0.00 1408.20 826.65 8400.52 00:32:45.039 00:32:45.039 real 0m13.542s 00:32:45.039 user 0m9.613s 00:32:45.039 sys 0m3.410s 00:32:45.039 06:59:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.040 ************************************ 00:32:45.040 END TEST xnvme_bdevperf 00:32:45.040 ************************************ 00:32:45.040 06:59:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:45.040 06:59:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:32:45.040 06:59:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:45.040 06:59:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.040 06:59:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:45.040 ************************************ 00:32:45.040 START TEST xnvme_fio_plugin 00:32:45.040 ************************************ 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:45.040 
06:59:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:45.040 06:59:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:45.040 { 00:32:45.040 "subsystems": [ 00:32:45.040 { 00:32:45.040 "subsystem": "bdev", 00:32:45.040 "config": [ 00:32:45.040 { 00:32:45.040 "params": { 00:32:45.040 "io_mechanism": "io_uring", 00:32:45.040 "conserve_cpu": true, 00:32:45.040 "filename": "/dev/nvme0n1", 00:32:45.040 "name": "xnvme_bdev" 00:32:45.040 }, 00:32:45.040 "method": "bdev_xnvme_create" 00:32:45.040 }, 00:32:45.040 { 00:32:45.040 "method": "bdev_wait_for_examine" 00:32:45.040 } 00:32:45.040 ] 00:32:45.040 } 00:32:45.040 ] 00:32:45.040 } 00:32:45.299 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:32:45.299 fio-3.35 00:32:45.299 Starting 1 thread 00:32:51.861 00:32:51.861 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72050: Fri Dec 6 06:59:23 2024 00:32:51.861 read: IOPS=47.7k, BW=186MiB/s (195MB/s)(932MiB/5001msec) 00:32:51.861 slat (usec): min=2, max=234, avg= 4.22, stdev= 1.95 00:32:51.861 clat (usec): min=163, max=2834, avg=1172.64, stdev=172.33 00:32:51.861 lat (usec): min=167, max=2841, avg=1176.86, stdev=172.94 00:32:51.861 clat percentiles (usec): 00:32:51.861 | 1.00th=[ 889], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1037], 00:32:51.861 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1156], 60.00th=[ 1188], 00:32:51.861 | 70.00th=[ 1237], 80.00th=[ 1287], 90.00th=[ 1385], 95.00th=[ 1500], 00:32:51.861 | 99.00th=[ 1729], 99.50th=[ 1827], 99.90th=[ 2024], 99.95th=[ 2212], 00:32:51.861 | 99.99th=[ 2737] 00:32:51.861 bw ( KiB/s): min=175104, 
max=219136, per=99.17%, avg=189212.44, stdev=12892.11, samples=9 00:32:51.861 iops : min=43776, max=54784, avg=47303.11, stdev=3223.03, samples=9 00:32:51.861 lat (usec) : 250=0.01%, 750=0.01%, 1000=12.59% 00:32:51.861 lat (msec) : 2=87.28%, 4=0.12% 00:32:51.861 cpu : usr=62.96%, sys=32.84%, ctx=36, majf=0, minf=762 00:32:51.861 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:32:51.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.861 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:32:51.861 issued rwts: total=238535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.861 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:51.861 00:32:51.861 Run status group 0 (all jobs): 00:32:51.861 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=932MiB (977MB), run=5001-5001msec 00:32:52.427 ----------------------------------------------------- 00:32:52.427 Suppressions used: 00:32:52.428 count bytes template 00:32:52.428 1 11 /usr/src/fio/parse.c 00:32:52.428 1 8 libtcmalloc_minimal.so 00:32:52.428 1 904 libcrypto.so 00:32:52.428 ----------------------------------------------------- 00:32:52.428 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:52.428 06:59:24 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:52.428 06:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:52.428 { 00:32:52.428 "subsystems": [ 00:32:52.428 { 00:32:52.428 "subsystem": "bdev", 00:32:52.428 "config": [ 00:32:52.428 { 00:32:52.428 "params": { 00:32:52.428 "io_mechanism": "io_uring", 00:32:52.428 "conserve_cpu": true, 00:32:52.428 "filename": "/dev/nvme0n1", 00:32:52.428 "name": "xnvme_bdev" 00:32:52.428 }, 00:32:52.428 "method": "bdev_xnvme_create" 00:32:52.428 }, 00:32:52.428 { 00:32:52.428 "method": "bdev_wait_for_examine" 00:32:52.428 } 00:32:52.428 ] 00:32:52.428 } 00:32:52.428 ] 00:32:52.428 } 00:32:52.686 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:32:52.686 fio-3.35 00:32:52.686 Starting 1 thread 00:32:59.244 00:32:59.244 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72149: Fri Dec 6 06:59:30 2024 00:32:59.244 write: IOPS=45.1k, BW=176MiB/s (185MB/s)(881MiB/5001msec); 0 zone resets 00:32:59.244 slat (nsec): min=2766, max=97940, avg=4590.17, stdev=2186.27 00:32:59.244 clat (usec): min=712, max=2484, avg=1234.76, stdev=161.38 00:32:59.244 lat (usec): min=716, max=2488, avg=1239.35, stdev=161.90 00:32:59.244 clat percentiles (usec): 00:32:59.244 | 1.00th=[ 971], 5.00th=[ 1029], 10.00th=[ 1057], 20.00th=[ 1106], 00:32:59.244 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1221], 60.00th=[ 1254], 00:32:59.244 | 70.00th=[ 1287], 80.00th=[ 1336], 90.00th=[ 1418], 95.00th=[ 1549], 00:32:59.244 | 99.00th=[ 1795], 99.50th=[ 1876], 99.90th=[ 2040], 99.95th=[ 2114], 00:32:59.244 | 99.99th=[ 2278] 00:32:59.244 bw ( KiB/s): min=175616, max=185344, per=100.00%, avg=180672.89, stdev=3327.85, samples=9 00:32:59.244 iops : min=43904, max=46336, avg=45168.22, stdev=831.96, samples=9 00:32:59.244 lat (usec) : 750=0.01%, 1000=2.35% 00:32:59.244 lat (msec) : 2=97.51%, 4=0.14% 00:32:59.244 cpu : usr=61.88%, sys=34.14%, ctx=15, majf=0, minf=763 00:32:59.244 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:32:59.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.244 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:32:59.244 issued rwts: total=0,225593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:59.244 00:32:59.244 Run status group 0 (all jobs): 00:32:59.244 WRITE: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=881MiB (924MB), run=5001-5001msec 00:32:59.503 ----------------------------------------------------- 00:32:59.503 Suppressions used: 00:32:59.503 count bytes template 00:32:59.503 1 11 /usr/src/fio/parse.c 00:32:59.503 1 8 libtcmalloc_minimal.so 00:32:59.503 1 904 libcrypto.so 00:32:59.503 ----------------------------------------------------- 00:32:59.503 00:32:59.503 00:32:59.503 real 0m14.586s 00:32:59.503 user 0m9.944s 00:32:59.503 sys 0m3.961s 00:32:59.503 
************************************ 00:32:59.503 END TEST xnvme_fio_plugin 00:32:59.503 ************************************ 00:32:59.503 06:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.503 06:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:32:59.762 06:59:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:32:59.762 06:59:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:59.762 06:59:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.762 06:59:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:59.762 ************************************ 00:32:59.762 START TEST xnvme_rpc 00:32:59.762 ************************************ 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72234 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72234 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72234 ']' 00:32:59.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.762 06:59:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:59.762 [2024-12-06 06:59:32.268550] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:32:59.762 [2024-12-06 06:59:32.268975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72234 ] 00:33:00.021 [2024-12-06 06:59:32.456994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.021 [2024-12-06 06:59:32.585996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.956 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.956 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:00.956 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:33:00.956 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.956 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:00.957 xnvme_bdev 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.957 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72234 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72234 ']' 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72234 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72234 00:33:01.242 killing process with pid 72234 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72234' 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72234 00:33:01.242 06:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72234 00:33:03.795 00:33:03.795 real 0m3.641s 00:33:03.795 user 0m4.019s 00:33:03.795 sys 0m0.479s 00:33:03.795 ************************************ 00:33:03.795 END TEST xnvme_rpc 00:33:03.795 ************************************ 00:33:03.795 06:59:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.795 06:59:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:03.795 06:59:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:33:03.795 06:59:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:03.795 06:59:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.795 06:59:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:03.795 ************************************ 00:33:03.795 START TEST xnvme_bdevperf 00:33:03.795 ************************************ 00:33:03.795 06:59:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:33:03.795 06:59:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:33:03.795 06:59:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:33:03.795 06:59:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:03.795 06:59:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:33:03.795 06:59:35 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:33:03.795 06:59:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:03.795 06:59:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.795 { 00:33:03.795 "subsystems": [ 00:33:03.795 { 00:33:03.795 "subsystem": "bdev", 00:33:03.795 "config": [ 00:33:03.795 { 00:33:03.795 "params": { 00:33:03.795 "io_mechanism": "io_uring_cmd", 00:33:03.795 "conserve_cpu": false, 00:33:03.796 "filename": "/dev/ng0n1", 00:33:03.796 "name": "xnvme_bdev" 00:33:03.796 }, 00:33:03.796 "method": "bdev_xnvme_create" 00:33:03.796 }, 00:33:03.796 { 00:33:03.796 "method": "bdev_wait_for_examine" 00:33:03.796 } 00:33:03.796 ] 00:33:03.796 } 00:33:03.796 ] 00:33:03.796 } 00:33:03.796 [2024-12-06 06:59:35.952215] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:33:03.796 [2024-12-06 06:59:35.952565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72308 ] 00:33:03.796 [2024-12-06 06:59:36.144682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.796 [2024-12-06 06:59:36.275318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.054 Running I/O for 5 seconds... 00:33:06.364 52339.00 IOPS, 204.45 MiB/s [2024-12-06T06:59:39.891Z] 51321.50 IOPS, 200.47 MiB/s [2024-12-06T06:59:40.824Z] 50726.33 IOPS, 198.15 MiB/s [2024-12-06T06:59:41.760Z] 51660.75 IOPS, 201.80 MiB/s 00:33:09.169 Latency(us) 00:33:09.169 [2024-12-06T06:59:41.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.169 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:33:09.169 xnvme_bdev : 5.00 51498.76 201.17 0.00 0.00 1238.60 580.89 3842.79 00:33:09.169 [2024-12-06T06:59:41.760Z] =================================================================================================================== 00:33:09.169 [2024-12-06T06:59:41.760Z] Total : 51498.76 201.17 0.00 0.00 1238.60 580.89 3842.79 00:33:10.106 06:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:10.106 06:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:33:10.106 06:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:33:10.106 06:59:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:10.106 06:59:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:10.106 { 00:33:10.106 "subsystems": [ 00:33:10.106 { 00:33:10.106 "subsystem": "bdev", 00:33:10.106 "config": [ 00:33:10.106 { 00:33:10.106 "params": { 00:33:10.106 "io_mechanism": "io_uring_cmd", 00:33:10.106 "conserve_cpu": false, 00:33:10.106 "filename": "/dev/ng0n1", 00:33:10.106 "name": "xnvme_bdev" 00:33:10.106 }, 00:33:10.106 "method": "bdev_xnvme_create" 00:33:10.106 }, 00:33:10.106 { 00:33:10.106 "method": "bdev_wait_for_examine" 00:33:10.106 } 00:33:10.106 ] 00:33:10.106 } 00:33:10.106 ] 00:33:10.106 } 00:33:10.106 [2024-12-06 06:59:42.694507] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:33:10.106 [2024-12-06 06:59:42.694920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72388 ] 00:33:10.364 [2024-12-06 06:59:42.880997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.623 [2024-12-06 06:59:43.013941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.882 Running I/O for 5 seconds... 00:33:12.755 49216.00 IOPS, 192.25 MiB/s [2024-12-06T06:59:46.722Z] 48736.00 IOPS, 190.38 MiB/s [2024-12-06T06:59:47.655Z] 49344.00 IOPS, 192.75 MiB/s [2024-12-06T06:59:48.588Z] 49568.00 IOPS, 193.62 MiB/s [2024-12-06T06:59:48.588Z] 49977.60 IOPS, 195.22 MiB/s 00:33:15.997 Latency(us) 00:33:15.997 [2024-12-06T06:59:48.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.997 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:33:15.997 xnvme_bdev : 5.00 49968.43 195.19 0.00 0.00 1276.38 800.58 3157.64 00:33:15.997 [2024-12-06T06:59:48.588Z] =================================================================================================================== 00:33:15.997 [2024-12-06T06:59:48.588Z] Total : 49968.43 195.19 0.00 0.00 1276.38 800.58 3157.64 00:33:16.932 06:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:16.932 06:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:33:16.932 06:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:33:16.932 06:59:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:16.932 06:59:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.932 { 00:33:16.932 "subsystems": [ 00:33:16.932 { 00:33:16.932 "subsystem": "bdev", 00:33:16.932 "config": [ 00:33:16.932 { 00:33:16.932 "params": { 00:33:16.932 "io_mechanism": "io_uring_cmd", 00:33:16.932 "conserve_cpu": false, 00:33:16.932 "filename": "/dev/ng0n1", 00:33:16.932 "name": "xnvme_bdev" 00:33:16.932 }, 00:33:16.932 "method": "bdev_xnvme_create" 00:33:16.932 }, 00:33:16.932 { 00:33:16.932 "method": "bdev_wait_for_examine" 00:33:16.932 } 00:33:16.932 ] 00:33:16.932 } 00:33:16.932 ] 00:33:16.932 } 00:33:16.932 [2024-12-06 06:59:49.454202] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:33:16.932 [2024-12-06 06:59:49.454489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72468 ] 00:33:17.190 [2024-12-06 06:59:49.650922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.190 [2024-12-06 06:59:49.754415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.757 Running I/O for 5 seconds... 
00:33:19.631 71040.00 IOPS, 277.50 MiB/s [2024-12-06T06:59:53.184Z] 71008.00 IOPS, 277.38 MiB/s [2024-12-06T06:59:54.121Z] 71253.33 IOPS, 278.33 MiB/s [2024-12-06T06:59:55.499Z] 71008.00 IOPS, 277.38 MiB/s [2024-12-06T06:59:55.499Z] 71065.60 IOPS, 277.60 MiB/s 00:33:22.908 Latency(us) 00:33:22.908 [2024-12-06T06:59:55.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.908 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:33:22.908 xnvme_bdev : 5.00 71037.01 277.49 0.00 0.00 897.02 525.03 3410.85 00:33:22.908 [2024-12-06T06:59:55.499Z] =================================================================================================================== 00:33:22.908 [2024-12-06T06:59:55.499Z] Total : 71037.01 277.49 0.00 0.00 897.02 525.03 3410.85 00:33:23.843 06:59:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:23.843 06:59:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:33:23.843 06:59:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:33:23.843 06:59:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:23.843 06:59:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.843 { 00:33:23.843 "subsystems": [ 00:33:23.843 { 00:33:23.843 "subsystem": "bdev", 00:33:23.843 "config": [ 00:33:23.843 { 00:33:23.843 "params": { 00:33:23.843 "io_mechanism": "io_uring_cmd", 00:33:23.843 "conserve_cpu": false, 00:33:23.843 "filename": "/dev/ng0n1", 00:33:23.843 "name": "xnvme_bdev" 00:33:23.843 }, 00:33:23.843 "method": "bdev_xnvme_create" 00:33:23.843 }, 00:33:23.843 { 00:33:23.843 "method": "bdev_wait_for_examine" 00:33:23.843 } 00:33:23.843 ] 00:33:23.843 } 00:33:23.843 ] 00:33:23.843 } 00:33:23.843 [2024-12-06 06:59:56.170172] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:33:23.843 [2024-12-06 06:59:56.171053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72541 ] 00:33:23.843 [2024-12-06 06:59:56.362071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.102 [2024-12-06 06:59:56.504451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.361 Running I/O for 5 seconds... 
00:33:26.668 44998.00 IOPS, 175.77 MiB/s [2024-12-06T07:00:00.191Z] 45240.50 IOPS, 176.72 MiB/s [2024-12-06T07:00:01.127Z] 45845.67 IOPS, 179.08 MiB/s [2024-12-06T07:00:02.063Z] 46090.50 IOPS, 180.04 MiB/s [2024-12-06T07:00:02.063Z] 45913.80 IOPS, 179.35 MiB/s 00:33:29.472 Latency(us) 00:33:29.472 [2024-12-06T07:00:02.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.472 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:33:29.472 xnvme_bdev : 5.00 45882.34 179.23 0.00 0.00 1390.60 342.57 8817.57 00:33:29.472 [2024-12-06T07:00:02.063Z] =================================================================================================================== 00:33:29.472 [2024-12-06T07:00:02.063Z] Total : 45882.34 179.23 0.00 0.00 1390.60 342.57 8817.57 00:33:30.409 00:33:30.409 real 0m27.073s 00:33:30.409 user 0m15.593s 00:33:30.409 sys 0m11.047s 00:33:30.409 07:00:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.409 07:00:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:30.409 ************************************ 00:33:30.409 END TEST xnvme_bdevperf 00:33:30.409 ************************************ 00:33:30.409 07:00:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:33:30.409 07:00:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:30.409 07:00:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.409 07:00:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:30.409 ************************************ 00:33:30.409 START TEST xnvme_fio_plugin 00:33:30.409 ************************************ 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
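The bdevperf sweep that just closed reuses one invocation shape for all four workloads; only -w changes across the randread, randwrite, unmap, and write_zeroes passes. A sketch of a single pass against the char-device (io_uring_cmd) bdev, with the JSON written to a file instead of the harness's /dev/fd/62 — the file path is illustrative, the device and repo path are the ones shown in this log:

# One bdevperf pass from the io_uring_cmd sweep; swap -w for the other patterns.
cat > /tmp/xnvme_ng.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": false,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_ng.json -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev

Here -T restricts the run to the named bdev and -o 4096 matches the 4k block size the fio jobs use, so the IOPS figures in this log are comparable between the two tools.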
00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:30.409 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:30.667 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:30.667 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:30.667 07:00:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:33:30.667 07:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:30.667 07:00:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:30.667 { 00:33:30.667 "subsystems": [ 00:33:30.667 { 00:33:30.667 "subsystem": "bdev", 00:33:30.667 "config": [ 00:33:30.667 { 00:33:30.667 "params": { 00:33:30.667 "io_mechanism": "io_uring_cmd", 00:33:30.667 "conserve_cpu": false, 00:33:30.667 "filename": "/dev/ng0n1", 00:33:30.667 "name": "xnvme_bdev" 00:33:30.667 }, 00:33:30.667 "method": "bdev_xnvme_create" 00:33:30.667 }, 00:33:30.667 { 00:33:30.667 "method": "bdev_wait_for_examine" 00:33:30.667 } 00:33:30.667 ] 00:33:30.667 } 00:33:30.667 ] 00:33:30.667 } 00:33:30.667 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:33:30.667 fio-3.35 00:33:30.667 Starting 1 thread 00:33:37.228 00:33:37.228 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72661: Fri Dec 6 07:00:08 2024 00:33:37.228 read: IOPS=54.5k, BW=213MiB/s (223MB/s)(1064MiB/5001msec) 00:33:37.228 slat (usec): min=2, max=117, avg= 3.98, stdev= 1.94 00:33:37.228 clat (usec): min=174, max=4721, avg=1018.86, stdev=160.82 00:33:37.228 lat (usec): min=182, max=4725, avg=1022.84, stdev=161.09 00:33:37.228 clat percentiles (usec): 00:33:37.228 | 1.00th=[ 783], 5.00th=[ 848], 10.00th=[ 873], 20.00th=[ 914], 00:33:37.228 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 1004], 60.00th=[ 1037], 00:33:37.228 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1221], 00:33:37.228 | 99.00th=[ 1467], 99.50th=[ 1729], 99.90th=[ 2966], 99.95th=[ 3359], 00:33:37.228 | 99.99th=[ 4047] 00:33:37.228 bw ( KiB/s): min=203936, max=228888, per=99.53%, avg=216810.67, stdev=7605.00, samples=9 00:33:37.228 iops : min=50984, max=57222, avg=54202.67, stdev=1901.25, samples=9 00:33:37.228 lat (usec) : 250=0.01%, 500=0.08%, 750=0.62%, 1000=47.85% 00:33:37.228 lat (msec) : 2=51.13%, 4=0.30%, 10=0.01% 00:33:37.228 cpu : usr=45.00%, sys=54.06%, ctx=8, majf=0, minf=762 00:33:37.228 IO depths : 1=1.4%, 2=2.9%, 4=6.0%, 8=12.2%, 16=24.7%, 32=51.1%, >=64=1.7% 00:33:37.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.228 complete : 0=0.0%, 4=98.4%, 8=0.1%, 
16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:33:37.228 issued rwts: total=272353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:37.228 00:33:37.228 Run status group 0 (all jobs): 00:33:37.228 READ: bw=213MiB/s (223MB/s), 213MiB/s-213MiB/s (223MB/s-223MB/s), io=1064MiB (1116MB), run=5001-5001msec 00:33:37.797 ----------------------------------------------------- 00:33:37.797 Suppressions used: 00:33:37.797 count bytes template 00:33:37.797 1 11 /usr/src/fio/parse.c 00:33:37.797 1 8 libtcmalloc_minimal.so 00:33:37.797 1 904 libcrypto.so 00:33:37.797 ----------------------------------------------------- 00:33:37.797 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:37.797 07:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:37.797 { 00:33:37.797 "subsystems": [ 00:33:37.797 { 00:33:37.797 "subsystem": "bdev", 00:33:37.797 "config": [ 00:33:37.797 { 00:33:37.797 "params": { 00:33:37.797 "io_mechanism": "io_uring_cmd", 00:33:37.797 "conserve_cpu": false, 00:33:37.797 "filename": "/dev/ng0n1", 00:33:37.797 "name": "xnvme_bdev" 00:33:37.797 }, 00:33:37.797 "method": "bdev_xnvme_create" 00:33:37.797 }, 00:33:37.797 { 00:33:37.797 "method": "bdev_wait_for_examine" 00:33:37.797 } 00:33:37.797 ] 00:33:37.797 } 00:33:37.797 ] 00:33:37.797 } 00:33:38.056 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:33:38.056 fio-3.35 00:33:38.056 Starting 1 thread 00:33:44.674 00:33:44.674 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72754: Fri Dec 6 07:00:16 2024 00:33:44.674 write: IOPS=42.1k, BW=164MiB/s (172MB/s)(823MiB/5001msec); 0 zone resets 00:33:44.674 slat (nsec): min=2592, max=71808, avg=5385.73, stdev=2794.42 00:33:44.674 clat (usec): min=298, max=3296, avg=1304.77, stdev=196.74 00:33:44.674 lat (usec): min=302, max=3303, avg=1310.16, stdev=197.86 00:33:44.674 clat percentiles (usec): 00:33:44.674 | 1.00th=[ 988], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[ 1139], 00:33:44.674 | 30.00th=[ 1188], 40.00th=[ 1237], 50.00th=[ 1270], 60.00th=[ 1319], 00:33:44.674 | 70.00th=[ 1385], 80.00th=[ 1467], 90.00th=[ 1582], 95.00th=[ 1680], 00:33:44.674 | 99.00th=[ 1827], 99.50th=[ 1893], 99.90th=[ 2114], 99.95th=[ 2442], 00:33:44.674 | 99.99th=[ 2769] 00:33:44.674 bw ( KiB/s): min=160768, max=175104, per=100.00%, avg=168868.44, stdev=4993.40, samples=9 00:33:44.674 iops : min=40192, max=43776, avg=42217.11, stdev=1248.35, samples=9 00:33:44.674 lat (usec) : 500=0.01%, 750=0.03%, 1000=1.63% 00:33:44.674 lat (msec) : 2=98.13%, 4=0.21% 00:33:44.674 cpu : usr=43.83%, sys=54.91%, ctx=10, majf=0, minf=763 00:33:44.674 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:33:44.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.674 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:33:44.674 issued rwts: total=0,210582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.674 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:44.674 00:33:44.674 Run status group 0 (all jobs): 00:33:44.674 WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=823MiB (863MB), run=5001-5001msec 00:33:44.934 ----------------------------------------------------- 00:33:44.934 Suppressions used: 00:33:44.934 count bytes template 00:33:44.934 1 11 /usr/src/fio/parse.c 00:33:44.934 1 8 libtcmalloc_minimal.so 00:33:44.934 1 904 libcrypto.so 00:33:44.934 ----------------------------------------------------- 00:33:44.934 00:33:44.934 00:33:44.934 real 0m14.363s 00:33:44.934 user 0m7.922s 00:33:44.934 sys 0m6.040s 00:33:44.934 07:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.934 07:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:33:44.934 ************************************ 00:33:44.934 END TEST xnvme_fio_plugin 00:33:44.934 ************************************ 00:33:44.934 07:00:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:33:44.934 07:00:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:33:44.934 07:00:17 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:33:44.934 07:00:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:33:44.934 07:00:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:44.934 07:00:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.934 07:00:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:44.934 ************************************ 00:33:44.934 START TEST xnvme_rpc 00:33:44.934 ************************************ 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:33:44.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72845 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72845 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72845 ']' 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.934 07:00:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:44.934 [2024-12-06 07:00:17.514851] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:33:44.934 [2024-12-06 07:00:17.515277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72845 ] 00:33:45.193 [2024-12-06 07:00:17.697427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.453 [2024-12-06 07:00:17.793505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:46.021 xnvme_bdev 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:46.021 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:33:46.280 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72845 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72845 ']' 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72845 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72845 00:33:46.281 killing process with pid 72845 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72845' 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72845 00:33:46.281 07:00:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72845 00:33:48.188 00:33:48.188 real 0m3.205s 00:33:48.188 user 0m3.527s 00:33:48.188 sys 0m0.419s 00:33:48.188 07:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.188 07:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:48.188 ************************************ 00:33:48.188 END TEST xnvme_rpc 00:33:48.188 ************************************ 00:33:48.188 07:00:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:33:48.188 07:00:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:48.188 07:00:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:48.188 07:00:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:48.188 ************************************ 00:33:48.188 START TEST xnvme_bdevperf 00:33:48.188 ************************************ 00:33:48.188 07:00:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:33:48.188 07:00:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:33:48.188 07:00:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:33:48.188 07:00:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:48.188 07:00:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:33:48.188 07:00:20 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:33:48.188 07:00:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:48.188 07:00:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:48.188 { 00:33:48.188 "subsystems": [ 00:33:48.188 { 00:33:48.188 "subsystem": "bdev", 00:33:48.188 "config": [ 00:33:48.188 { 00:33:48.188 "params": { 00:33:48.188 "io_mechanism": "io_uring_cmd", 00:33:48.188 "conserve_cpu": true, 00:33:48.188 "filename": "/dev/ng0n1", 00:33:48.188 "name": "xnvme_bdev" 00:33:48.188 }, 00:33:48.188 "method": "bdev_xnvme_create" 00:33:48.188 }, 00:33:48.188 { 00:33:48.188 "method": "bdev_wait_for_examine" 00:33:48.188 } 00:33:48.188 ] 00:33:48.188 } 00:33:48.188 ] 00:33:48.188 } 00:33:48.188 [2024-12-06 07:00:20.737874] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:33:48.188 [2024-12-06 07:00:20.737996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72914 ] 00:33:48.449 [2024-12-06 07:00:20.908517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.449 [2024-12-06 07:00:20.997059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.707 Running I/O for 5 seconds... 00:33:51.008 51979.00 IOPS, 203.04 MiB/s [2024-12-06T07:00:24.532Z] 54106.50 IOPS, 211.35 MiB/s [2024-12-06T07:00:25.465Z] 53872.33 IOPS, 210.44 MiB/s [2024-12-06T07:00:26.399Z] 54260.25 IOPS, 211.95 MiB/s [2024-12-06T07:00:26.399Z] 54051.60 IOPS, 211.14 MiB/s 00:33:53.808 Latency(us) 00:33:53.808 [2024-12-06T07:00:26.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.809 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:33:53.809 xnvme_bdev : 5.00 54038.35 211.09 0.00 0.00 1180.47 208.52 5034.36 00:33:53.809 [2024-12-06T07:00:26.400Z] =================================================================================================================== 00:33:53.809 [2024-12-06T07:00:26.400Z] Total : 54038.35 211.09 0.00 0.00 1180.47 208.52 5034.36 00:33:54.746 07:00:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:54.746 07:00:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:33:54.746 07:00:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:33:54.746 07:00:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:54.746 07:00:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.746 { 00:33:54.746 "subsystems": [ 00:33:54.746 { 00:33:54.746 "subsystem": "bdev", 00:33:54.746 "config": [ 00:33:54.746 { 00:33:54.746 "params": { 00:33:54.746 "io_mechanism": "io_uring_cmd", 00:33:54.746 "conserve_cpu": true, 00:33:54.746 "filename": "/dev/ng0n1", 00:33:54.746 "name": "xnvme_bdev" 00:33:54.746 }, 00:33:54.746 "method": "bdev_xnvme_create" 00:33:54.746 }, 00:33:54.746 { 00:33:54.746 "method": "bdev_wait_for_examine" 00:33:54.746 } 00:33:54.746 ] 00:33:54.746 } 00:33:54.746 ] 00:33:54.746 } 00:33:54.746 [2024-12-06 07:00:27.302815] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:33:54.746 [2024-12-06 07:00:27.302977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ] 00:33:55.005 [2024-12-06 07:00:27.486256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.005 [2024-12-06 07:00:27.579323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.574 Running I/O for 5 seconds... 00:33:57.500 39551.00 IOPS, 154.50 MiB/s [2024-12-06T07:00:31.028Z] 39519.50 IOPS, 154.37 MiB/s [2024-12-06T07:00:31.964Z] 39317.00 IOPS, 153.58 MiB/s [2024-12-06T07:00:32.900Z] 39055.75 IOPS, 152.56 MiB/s [2024-12-06T07:00:32.900Z] 38835.00 IOPS, 151.70 MiB/s 00:34:00.309 Latency(us) 00:34:00.309 [2024-12-06T07:00:32.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.309 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:34:00.309 xnvme_bdev : 5.01 38783.88 151.50 0.00 0.00 1644.04 860.16 6464.23 00:34:00.309 [2024-12-06T07:00:32.900Z] =================================================================================================================== 00:34:00.309 [2024-12-06T07:00:32.900Z] Total : 38783.88 151.50 0.00 0.00 1644.04 860.16 6464.23 00:34:01.686 07:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:34:01.686 07:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:34:01.686 07:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:34:01.686 07:00:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:34:01.686 07:00:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.686 { 00:34:01.686 "subsystems": [ 00:34:01.686 { 00:34:01.686 "subsystem": "bdev", 00:34:01.686 "config": [ 00:34:01.686 { 00:34:01.686 "params": { 00:34:01.686 "io_mechanism": "io_uring_cmd", 00:34:01.686 "conserve_cpu": true, 00:34:01.686 "filename": "/dev/ng0n1", 00:34:01.686 "name": "xnvme_bdev" 00:34:01.686 }, 00:34:01.686 "method": "bdev_xnvme_create" 00:34:01.686 }, 00:34:01.686 { 00:34:01.686 "method": "bdev_wait_for_examine" 00:34:01.686 } 00:34:01.686 ] 00:34:01.686 } 00:34:01.686 ] 00:34:01.686 } 00:34:01.686 [2024-12-06 07:00:34.084884] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:34:01.686 [2024-12-06 07:00:34.085856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73071 ] 00:34:01.686 [2024-12-06 07:00:34.269559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.946 [2024-12-06 07:00:34.363296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.205 Running I/O for 5 seconds... 
00:34:04.073 87872.00 IOPS, 343.25 MiB/s [2024-12-06T07:00:38.048Z] 80544.00 IOPS, 314.62 MiB/s [2024-12-06T07:00:38.986Z] 78656.00 IOPS, 307.25 MiB/s [2024-12-06T07:00:39.918Z] 76704.00 IOPS, 299.62 MiB/s 00:34:07.327 Latency(us) 00:34:07.327 [2024-12-06T07:00:39.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.327 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:34:07.327 xnvme_bdev : 5.00 75715.34 295.76 0.00 0.00 841.58 437.53 3172.54 00:34:07.327 [2024-12-06T07:00:39.918Z] =================================================================================================================== 00:34:07.327 [2024-12-06T07:00:39.918Z] Total : 75715.34 295.76 0.00 0.00 841.58 437.53 3172.54 00:34:08.263 07:00:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:34:08.263 07:00:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:34:08.263 07:00:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:34:08.263 07:00:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:34:08.263 07:00:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.263 { 00:34:08.263 "subsystems": [ 00:34:08.263 { 00:34:08.263 "subsystem": "bdev", 00:34:08.263 "config": [ 00:34:08.263 { 00:34:08.263 "params": { 00:34:08.263 "io_mechanism": "io_uring_cmd", 00:34:08.263 "conserve_cpu": true, 00:34:08.263 "filename": "/dev/ng0n1", 00:34:08.263 "name": "xnvme_bdev" 00:34:08.263 }, 00:34:08.263 "method": "bdev_xnvme_create" 00:34:08.263 }, 00:34:08.263 { 00:34:08.263 "method": "bdev_wait_for_examine" 00:34:08.263 } 00:34:08.263 ] 00:34:08.263 } 00:34:08.263 ] 00:34:08.263 } 00:34:08.263 [2024-12-06 07:00:40.637517] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:34:08.263 [2024-12-06 07:00:40.637696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73146 ] 00:34:08.263 [2024-12-06 07:00:40.816222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.522 [2024-12-06 07:00:40.924177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.781 Running I/O for 5 seconds... 
00:34:11.089 42806.00 IOPS, 167.21 MiB/s [2024-12-06T07:00:44.613Z] 42762.50 IOPS, 167.04 MiB/s [2024-12-06T07:00:45.546Z] 42785.67 IOPS, 167.13 MiB/s [2024-12-06T07:00:46.479Z] 43024.00 IOPS, 168.06 MiB/s 00:34:13.888 Latency(us) 00:34:13.888 [2024-12-06T07:00:46.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:13.888 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:34:13.888 xnvme_bdev : 5.00 43062.39 168.21 0.00 0.00 1480.60 133.12 12690.15 00:34:13.888 [2024-12-06T07:00:46.479Z] =================================================================================================================== 00:34:13.888 [2024-12-06T07:00:46.479Z] Total : 43062.39 168.21 0.00 0.00 1480.60 133.12 12690.15 00:34:14.825 00:34:14.825 real 0m26.666s 00:34:14.825 user 0m18.454s 00:34:14.825 sys 0m6.370s 00:34:14.825 ************************************ 00:34:14.825 END TEST xnvme_bdevperf 00:34:14.825 ************************************ 00:34:14.825 07:00:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.825 07:00:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.825 07:00:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:34:14.825 07:00:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:14.825 07:00:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.825 07:00:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:14.825 ************************************ 00:34:14.825 START TEST xnvme_fio_plugin 00:34:14.825 ************************************ 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:14.825 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:14.826 07:00:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:34:15.085 { 00:34:15.085 "subsystems": [ 00:34:15.085 { 00:34:15.085 "subsystem": "bdev", 00:34:15.085 "config": [ 00:34:15.085 { 00:34:15.085 "params": { 00:34:15.085 "io_mechanism": "io_uring_cmd", 00:34:15.085 "conserve_cpu": true, 00:34:15.085 "filename": "/dev/ng0n1", 00:34:15.085 "name": "xnvme_bdev" 00:34:15.085 }, 00:34:15.085 "method": "bdev_xnvme_create" 00:34:15.085 }, 00:34:15.085 { 00:34:15.085 "method": "bdev_wait_for_examine" 00:34:15.085 } 00:34:15.085 ] 00:34:15.085 } 00:34:15.085 ] 00:34:15.085 } 00:34:15.085 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:34:15.085 fio-3.35 00:34:15.085 Starting 1 thread 00:34:21.650 00:34:21.650 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73265: Fri Dec 6 07:00:53 2024 00:34:21.650 read: IOPS=49.2k, BW=192MiB/s (202MB/s)(962MiB/5001msec) 00:34:21.650 slat (nsec): min=2451, max=83693, avg=4074.25, stdev=2156.12 00:34:21.650 clat (usec): min=89, max=2507, avg=1136.77, stdev=141.28 00:34:21.650 lat (usec): min=93, max=2541, avg=1140.85, stdev=141.74 00:34:21.650 clat percentiles (usec): 00:34:21.650 | 1.00th=[ 889], 5.00th=[ 947], 10.00th=[ 979], 20.00th=[ 1020], 00:34:21.650 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156], 00:34:21.650 | 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[ 1369], 00:34:21.650 | 99.00th=[ 1614], 99.50th=[ 1713], 99.90th=[ 1991], 99.95th=[ 2114], 00:34:21.650 | 99.99th=[ 2311] 00:34:21.650 bw ( KiB/s): min=186368, max=213504, per=100.00%, avg=196964.56, stdev=8650.26, samples=9 00:34:21.650 iops : min=46592, max=53376, avg=49241.11, stdev=2162.60, samples=9 00:34:21.650 lat (usec) : 100=0.01%, 1000=14.15% 00:34:21.650 lat (msec) : 2=85.75%, 4=0.10% 00:34:21.650 cpu : usr=62.24%, sys=34.46%, ctx=8, majf=0, minf=762 00:34:21.650 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:34:21.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.650 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:34:21.650 issued rwts: 
total=246147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.650 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:21.650 00:34:21.650 Run status group 0 (all jobs): 00:34:21.650 READ: bw=192MiB/s (202MB/s), 192MiB/s-192MiB/s (202MB/s-202MB/s), io=962MiB (1008MB), run=5001-5001msec 00:34:22.217 ----------------------------------------------------- 00:34:22.217 Suppressions used: 00:34:22.217 count bytes template 00:34:22.217 1 11 /usr/src/fio/parse.c 00:34:22.217 1 8 libtcmalloc_minimal.so 00:34:22.217 1 904 libcrypto.so 00:34:22.217 ----------------------------------------------------- 00:34:22.217 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:22.217 07:00:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:34:22.217 { 00:34:22.217 "subsystems": [ 00:34:22.217 { 00:34:22.217 "subsystem": "bdev", 00:34:22.217 "config": [ 00:34:22.217 { 00:34:22.217 "params": { 00:34:22.217 "io_mechanism": "io_uring_cmd", 00:34:22.217 "conserve_cpu": true, 00:34:22.217 "filename": "/dev/ng0n1", 00:34:22.217 "name": "xnvme_bdev" 00:34:22.217 }, 00:34:22.217 "method": "bdev_xnvme_create" 00:34:22.217 }, 00:34:22.217 { 00:34:22.217 "method": "bdev_wait_for_examine" 00:34:22.217 } 00:34:22.217 ] 00:34:22.217 } 00:34:22.217 ] 00:34:22.217 } 00:34:22.475 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:34:22.475 fio-3.35 00:34:22.475 Starting 1 thread 00:34:29.039 00:34:29.039 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73364: Fri Dec 6 07:01:00 2024 00:34:29.039 write: IOPS=44.4k, BW=174MiB/s (182MB/s)(868MiB/5001msec); 0 zone resets 00:34:29.039 slat (usec): min=2, max=391, avg= 4.96, stdev= 3.72 00:34:29.039 clat (usec): min=73, max=11578, avg=1253.18, stdev=397.03 00:34:29.039 lat (usec): min=78, max=11582, avg=1258.13, stdev=397.43 00:34:29.039 clat percentiles (usec): 00:34:29.039 | 1.00th=[ 506], 5.00th=[ 963], 10.00th=[ 1012], 20.00th=[ 1074], 00:34:29.039 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1237], 00:34:29.039 | 70.00th=[ 1270], 80.00th=[ 1336], 90.00th=[ 1516], 95.00th=[ 1745], 00:34:29.039 | 99.00th=[ 2999], 99.50th=[ 3490], 99.90th=[ 4752], 99.95th=[ 5473], 00:34:29.039 | 99.99th=[10552] 00:34:29.039 bw ( KiB/s): min=164360, max=188416, per=100.00%, avg=177845.33, stdev=7998.53, samples=9 00:34:29.039 iops : min=41090, max=47104, avg=44461.33, stdev=1999.63, samples=9 00:34:29.039 lat (usec) : 100=0.01%, 250=0.21%, 500=0.76%, 750=1.23%, 1000=5.91% 00:34:29.040 lat (msec) : 2=88.60%, 4=3.06%, 10=0.21%, 20=0.02% 00:34:29.040 cpu : usr=60.12%, sys=34.02%, ctx=18, majf=0, minf=763 00:34:29.040 IO depths : 1=1.4%, 2=2.8%, 4=5.7%, 8=11.6%, 16=23.8%, 32=52.7%, >=64=2.0% 00:34:29.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.040 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:34:29.040 issued rwts: total=0,222258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:29.040 00:34:29.040 Run status group 0 (all jobs): 00:34:29.040 WRITE: bw=174MiB/s (182MB/s), 174MiB/s-174MiB/s (182MB/s-182MB/s), io=868MiB (910MB), run=5001-5001msec 00:34:29.608 ----------------------------------------------------- 00:34:29.608 Suppressions used: 00:34:29.608 count bytes template 00:34:29.608 1 11 /usr/src/fio/parse.c 00:34:29.608 1 8 libtcmalloc_minimal.so 00:34:29.608 1 904 libcrypto.so 00:34:29.608 ----------------------------------------------------- 00:34:29.608 00:34:29.608 00:34:29.608 real 0m14.620s 00:34:29.608 user 0m9.849s 00:34:29.608 sys 0m4.032s 00:34:29.608 07:01:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.608 ************************************ 00:34:29.608 END TEST xnvme_fio_plugin 00:34:29.608 ************************************ 00:34:29.608 07:01:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 Process with pid 72845 is not found 00:34:29.608 07:01:02 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72845 00:34:29.608 07:01:02 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72845 ']' 00:34:29.608 07:01:02 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 72845 00:34:29.608 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72845) - No such process 00:34:29.608 07:01:02 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72845 is not found' 00:34:29.608 00:34:29.608 real 3m44.572s 00:34:29.608 user 2m13.488s 00:34:29.608 sys 1m14.929s 00:34:29.608 07:01:02 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.608 ************************************ 00:34:29.608 END TEST nvme_xnvme 00:34:29.608 ************************************ 00:34:29.608 07:01:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 07:01:02 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:34:29.608 07:01:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:29.608 07:01:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.608 07:01:02 -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 ************************************ 00:34:29.608 START TEST blockdev_xnvme 00:34:29.608 ************************************ 00:34:29.608 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:34:29.608 * Looking for test storage... 00:34:29.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:29.608 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:29.608 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:34:29.608 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:29.867 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.867 07:01:02 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:34:29.867 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.867 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:29.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.867 --rc genhtml_branch_coverage=1 00:34:29.867 --rc genhtml_function_coverage=1 00:34:29.867 --rc genhtml_legend=1 00:34:29.867 --rc geninfo_all_blocks=1 00:34:29.867 --rc geninfo_unexecuted_blocks=1 00:34:29.867 00:34:29.867 ' 00:34:29.867 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:29.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.867 --rc genhtml_branch_coverage=1 00:34:29.867 --rc genhtml_function_coverage=1 00:34:29.867 --rc genhtml_legend=1 00:34:29.867 --rc geninfo_all_blocks=1 00:34:29.867 --rc geninfo_unexecuted_blocks=1 00:34:29.867 00:34:29.867 ' 00:34:29.867 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:29.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.867 --rc genhtml_branch_coverage=1 00:34:29.867 --rc genhtml_function_coverage=1 00:34:29.867 --rc genhtml_legend=1 00:34:29.867 --rc geninfo_all_blocks=1 00:34:29.867 --rc geninfo_unexecuted_blocks=1 00:34:29.867 00:34:29.867 ' 00:34:29.867 07:01:02 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:29.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.867 --rc genhtml_branch_coverage=1 00:34:29.867 --rc genhtml_function_coverage=1 00:34:29.867 --rc genhtml_legend=1 00:34:29.867 --rc geninfo_all_blocks=1 00:34:29.867 --rc geninfo_unexecuted_blocks=1 00:34:29.867 00:34:29.867 ' 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:34:29.867 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:34:29.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73493 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73493 00:34:29.868 07:01:02 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73493 ']' 00:34:29.868 07:01:02 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:29.868 07:01:02 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.868 07:01:02 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:29.868 07:01:02 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.868 07:01:02 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:29.868 07:01:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:29.868 [2024-12-06 07:01:02.412762] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:34:29.868 [2024-12-06 07:01:02.413305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73493 ] 00:34:30.126 [2024-12-06 07:01:02.603736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.385 [2024-12-06 07:01:02.727031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.952 07:01:03 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:30.952 07:01:03 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:34:30.952 07:01:03 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:34:30.952 07:01:03 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:34:30.952 07:01:03 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:34:30.952 07:01:03 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:34:30.952 07:01:03 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:31.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:32.086 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:32.087 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:32.087 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:32.087 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:34:32.087 07:01:04 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:34:32.087 nvme0n1 00:34:32.087 nvme0n2 00:34:32.087 nvme0n3 00:34:32.087 nvme1n1 00:34:32.087 nvme2n1 00:34:32.087 nvme3n1 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.087 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.087 07:01:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:32.348 
07:01:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5b16e0aa-ff22-4dd5-8c83-f46eed0ea8e8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b16e0aa-ff22-4dd5-8c83-f46eed0ea8e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "1d30d5ad-e07b-4744-b49c-d41127a23847"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d30d5ad-e07b-4744-b49c-d41127a23847",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "eda622ea-0995-4dc3-aabb-f1bab32c8387"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eda622ea-0995-4dc3-aabb-f1bab32c8387",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "891b8dc6-8812-48df-8f47-71212c76e224"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "891b8dc6-8812-48df-8f47-71212c76e224",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "96632ece-6ea4-4ab5-8316-d800226683f7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "96632ece-6ea4-4ab5-8316-d800226683f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "4175752c-8e7c-4c9b-b8f7-f187e67790d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4175752c-8e7c-4c9b-b8f7-f187e67790d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:34:32.348 07:01:04 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73493 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73493 ']' 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73493 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73493 00:34:32.348 killing process with pid 73493 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73493' 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73493 00:34:32.348 07:01:04 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73493 00:34:34.253 07:01:06 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:34.253 07:01:06 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:34:34.253 07:01:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:34.253 07:01:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.253 07:01:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:34.253 ************************************ 00:34:34.253 START TEST bdev_hello_world 00:34:34.253 ************************************ 00:34:34.253 07:01:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:34:34.253 [2024-12-06 07:01:06.690297] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:34:34.253 [2024-12-06 07:01:06.690444] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73777 ] 00:34:34.511 [2024-12-06 07:01:06.851844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.511 [2024-12-06 07:01:06.936700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.770 [2024-12-06 07:01:07.293896] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:34.770 [2024-12-06 07:01:07.293946] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:34:34.770 [2024-12-06 07:01:07.293967] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:34.770 [2024-12-06 07:01:07.296302] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:34.770 [2024-12-06 07:01:07.296661] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:34.770 [2024-12-06 07:01:07.296690] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:34.770 [2024-12-06 07:01:07.297309] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
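At this point the harness has built its device list by piping rpc_cmd bdev_get_bdevs through jq -r '.[] | select(.claimed == false)' into mapfile, picked nvme0n1 as hello_world_bdev, and run the hello_bdev example, which opens the bdev, writes a buffer, and reads back "Hello World!". A minimal by-hand equivalent, assuming the same checkout layout as this run and an SPDK target already serving /var/tmp/spdk.sock (paths are taken from the trace; rpc_cmd is effectively a wrapper around scripts/rpc.py):

    cd /home/vagrant/spdk_repo/spdk
    # list only unclaimed bdevs, as the harness does when choosing a target
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
    # exercise one bdev end to end; --json supplies the bdev definitions
    sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1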
00:34:34.770 00:34:34.770 [2024-12-06 07:01:07.297349] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:35.706 ************************************ 00:34:35.706 END TEST bdev_hello_world 00:34:35.706 ************************************ 00:34:35.706 00:34:35.706 real 0m1.513s 00:34:35.706 user 0m1.237s 00:34:35.706 sys 0m0.162s 00:34:35.706 07:01:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.706 07:01:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:34:35.706 07:01:08 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:34:35.706 07:01:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:35.706 07:01:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.706 07:01:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:35.706 ************************************ 00:34:35.706 START TEST bdev_bounds 00:34:35.706 ************************************ 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:34:35.706 Process bdevio pid: 73814 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73814 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73814' 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73814 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73814 ']' 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.706 07:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:35.706 [2024-12-06 07:01:08.255587] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
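The bdev_bounds stage launches bdevio with -w under the same JSON config; the flag appears to make the app start its reactors and then wait, so the CUnit suites are only kicked off later by the separate tests.py perform_tests call once /var/tmp/spdk.sock is listening. A sketch of that two-step flow under the same path assumptions:

    # step 1: start bdevio paused until an RPC trigger arrives (-w)
    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # step 2: once the socket accepts connections, trigger the test run
    sudo ./test/bdev/bdevio/tests.py perform_tests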
00:34:35.706 [2024-12-06 07:01:08.256041] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73814 ] 00:34:35.965 [2024-12-06 07:01:08.418881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:35.966 [2024-12-06 07:01:08.513462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.966 [2024-12-06 07:01:08.513567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.966 [2024-12-06 07:01:08.513578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:36.901 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.901 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:34:36.901 07:01:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:36.901 I/O targets: 00:34:36.901 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:36.901 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:36.901 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:36.901 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:34:36.901 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:34:36.901 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:34:36.901 00:34:36.901 00:34:36.901 CUnit - A unit testing framework for C - Version 2.1-3 00:34:36.901 http://cunit.sourceforge.net/ 00:34:36.901 00:34:36.901 00:34:36.901 Suite: bdevio tests on: nvme3n1 00:34:36.901 Test: blockdev write read block ...passed 00:34:36.901 Test: blockdev write zeroes read block ...passed 00:34:36.901 Test: blockdev write zeroes read no split ...passed 00:34:36.901 Test: blockdev write zeroes read split ...passed 00:34:36.901 Test: blockdev write zeroes read split partial ...passed 00:34:36.901 Test: blockdev reset ...passed 00:34:36.901 Test: blockdev write read 8 blocks ...passed 00:34:36.901 Test: blockdev write read size > 128k ...passed 00:34:36.901 Test: blockdev write read invalid size ...passed 00:34:36.901 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:36.901 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:36.901 Test: blockdev write read max offset ...passed 00:34:36.901 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:36.901 Test: blockdev writev readv 8 blocks ...passed 00:34:36.901 Test: blockdev writev readv 30 x 1block ...passed 00:34:36.901 Test: blockdev writev readv block ...passed 00:34:36.901 Test: blockdev writev readv size > 128k ...passed 00:34:36.901 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:36.901 Test: blockdev comparev and writev ...passed 00:34:36.901 Test: blockdev nvme passthru rw ...passed 00:34:36.901 Test: blockdev nvme passthru vendor specific ...passed 00:34:36.901 Test: blockdev nvme admin passthru ...passed 00:34:36.901 Test: blockdev copy ...passed 00:34:36.901 Suite: bdevio tests on: nvme2n1 00:34:36.901 Test: blockdev write read block ...passed 00:34:36.901 Test: blockdev write zeroes read block ...passed 00:34:36.901 Test: blockdev write zeroes read no split ...passed 00:34:36.901 Test: blockdev write zeroes read split ...passed 00:34:37.161 Test: blockdev write zeroes read split partial ...passed 00:34:37.161 Test: blockdev reset ...passed 
00:34:37.161 Test: blockdev write read 8 blocks ...passed 00:34:37.161 Test: blockdev write read size > 128k ...passed 00:34:37.161 Test: blockdev write read invalid size ...passed 00:34:37.161 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:37.161 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:37.161 Test: blockdev write read max offset ...passed 00:34:37.161 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:37.161 Test: blockdev writev readv 8 blocks ...passed 00:34:37.161 Test: blockdev writev readv 30 x 1block ...passed 00:34:37.161 Test: blockdev writev readv block ...passed 00:34:37.161 Test: blockdev writev readv size > 128k ...passed 00:34:37.161 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:37.161 Test: blockdev comparev and writev ...passed 00:34:37.161 Test: blockdev nvme passthru rw ...passed 00:34:37.161 Test: blockdev nvme passthru vendor specific ...passed 00:34:37.161 Test: blockdev nvme admin passthru ...passed 00:34:37.161 Test: blockdev copy ...passed 00:34:37.161 Suite: bdevio tests on: nvme1n1 00:34:37.161 Test: blockdev write read block ...passed 00:34:37.161 Test: blockdev write zeroes read block ...passed 00:34:37.161 Test: blockdev write zeroes read no split ...passed 00:34:37.161 Test: blockdev write zeroes read split ...passed 00:34:37.161 Test: blockdev write zeroes read split partial ...passed 00:34:37.161 Test: blockdev reset ...passed 00:34:37.161 Test: blockdev write read 8 blocks ...passed 00:34:37.161 Test: blockdev write read size > 128k ...passed 00:34:37.161 Test: blockdev write read invalid size ...passed 00:34:37.161 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:37.161 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:37.161 Test: blockdev write read max offset ...passed 00:34:37.161 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:37.161 Test: blockdev writev readv 8 blocks ...passed 00:34:37.161 Test: blockdev writev readv 30 x 1block ...passed 00:34:37.161 Test: blockdev writev readv block ...passed 00:34:37.161 Test: blockdev writev readv size > 128k ...passed 00:34:37.161 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:37.161 Test: blockdev comparev and writev ...passed 00:34:37.161 Test: blockdev nvme passthru rw ...passed 00:34:37.161 Test: blockdev nvme passthru vendor specific ...passed 00:34:37.161 Test: blockdev nvme admin passthru ...passed 00:34:37.161 Test: blockdev copy ...passed 00:34:37.161 Suite: bdevio tests on: nvme0n3 00:34:37.161 Test: blockdev write read block ...passed 00:34:37.161 Test: blockdev write zeroes read block ...passed 00:34:37.161 Test: blockdev write zeroes read no split ...passed 00:34:37.161 Test: blockdev write zeroes read split ...passed 00:34:37.161 Test: blockdev write zeroes read split partial ...passed 00:34:37.161 Test: blockdev reset ...passed 00:34:37.161 Test: blockdev write read 8 blocks ...passed 00:34:37.161 Test: blockdev write read size > 128k ...passed 00:34:37.161 Test: blockdev write read invalid size ...passed 00:34:37.161 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:37.161 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:37.161 Test: blockdev write read max offset ...passed 00:34:37.161 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:37.161 Test: blockdev writev readv 8 blocks 
...passed 00:34:37.161 Test: blockdev writev readv 30 x 1block ...passed 00:34:37.161 Test: blockdev writev readv block ...passed 00:34:37.161 Test: blockdev writev readv size > 128k ...passed 00:34:37.161 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:37.161 Test: blockdev comparev and writev ...passed 00:34:37.161 Test: blockdev nvme passthru rw ...passed 00:34:37.161 Test: blockdev nvme passthru vendor specific ...passed 00:34:37.161 Test: blockdev nvme admin passthru ...passed 00:34:37.161 Test: blockdev copy ...passed 00:34:37.161 Suite: bdevio tests on: nvme0n2 00:34:37.161 Test: blockdev write read block ...passed 00:34:37.161 Test: blockdev write zeroes read block ...passed 00:34:37.161 Test: blockdev write zeroes read no split ...passed 00:34:37.161 Test: blockdev write zeroes read split ...passed 00:34:37.161 Test: blockdev write zeroes read split partial ...passed 00:34:37.161 Test: blockdev reset ...passed 00:34:37.161 Test: blockdev write read 8 blocks ...passed 00:34:37.161 Test: blockdev write read size > 128k ...passed 00:34:37.161 Test: blockdev write read invalid size ...passed 00:34:37.161 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:37.161 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:37.161 Test: blockdev write read max offset ...passed 00:34:37.161 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:37.161 Test: blockdev writev readv 8 blocks ...passed 00:34:37.161 Test: blockdev writev readv 30 x 1block ...passed 00:34:37.161 Test: blockdev writev readv block ...passed 00:34:37.161 Test: blockdev writev readv size > 128k ...passed 00:34:37.161 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:37.161 Test: blockdev comparev and writev ...passed 00:34:37.161 Test: blockdev nvme passthru rw ...passed 00:34:37.161 Test: blockdev nvme passthru vendor specific ...passed 00:34:37.161 Test: blockdev nvme admin passthru ...passed 00:34:37.161 Test: blockdev copy ...passed 00:34:37.161 Suite: bdevio tests on: nvme0n1 00:34:37.161 Test: blockdev write read block ...passed 00:34:37.161 Test: blockdev write zeroes read block ...passed 00:34:37.161 Test: blockdev write zeroes read no split ...passed 00:34:37.161 Test: blockdev write zeroes read split ...passed 00:34:37.420 Test: blockdev write zeroes read split partial ...passed 00:34:37.420 Test: blockdev reset ...passed 00:34:37.420 Test: blockdev write read 8 blocks ...passed 00:34:37.420 Test: blockdev write read size > 128k ...passed 00:34:37.420 Test: blockdev write read invalid size ...passed 00:34:37.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:37.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:37.420 Test: blockdev write read max offset ...passed 00:34:37.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:37.420 Test: blockdev writev readv 8 blocks ...passed 00:34:37.420 Test: blockdev writev readv 30 x 1block ...passed 00:34:37.420 Test: blockdev writev readv block ...passed 00:34:37.420 Test: blockdev writev readv size > 128k ...passed 00:34:37.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:37.420 Test: blockdev comparev and writev ...passed 00:34:37.420 Test: blockdev nvme passthru rw ...passed 00:34:37.420 Test: blockdev nvme passthru vendor specific ...passed 00:34:37.420 Test: blockdev nvme admin passthru ...passed 00:34:37.420 Test: blockdev copy ...passed 
00:34:37.420 00:34:37.420 Run Summary: Type Total Ran Passed Failed Inactive 00:34:37.420 suites 6 6 n/a 0 0 00:34:37.420 tests 138 138 138 0 0 00:34:37.420 asserts 780 780 780 0 n/a 00:34:37.420 00:34:37.420 Elapsed time = 1.164 seconds 00:34:37.420 0 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73814 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73814 ']' 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73814 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73814 00:34:37.420 killing process with pid 73814 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73814' 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73814 00:34:37.420 07:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73814 00:34:38.372 07:01:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:34:38.372 00:34:38.372 real 0m2.533s 00:34:38.372 user 0m6.578s 00:34:38.372 sys 0m0.310s 00:34:38.372 07:01:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.372 07:01:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:38.372 ************************************ 00:34:38.372 END TEST bdev_bounds 00:34:38.372 ************************************ 00:34:38.372 07:01:10 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:34:38.372 07:01:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:38.372 07:01:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:38.372 07:01:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:38.372 ************************************ 00:34:38.372 START TEST bdev_nbd 00:34:38.372 ************************************ 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
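bdevio ran 138 tests, one suite per bdev, with no failures, so the stage killed pid 73814 and moved on. The bdev_nbd stage starting here exports the same six bdevs through the kernel NBD driver, pairing each one with a /dev/nbdX node from the nbd_list set up in the trace that follows. Conceptually the pairing loop is the sketch below; the real logic lives in bdev/nbd_common.sh and talks to the dedicated /var/tmp/spdk-nbd.sock socket:

    bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for i in "${!bdev_list[@]}"; do
        # nbd_start_disk <bdev_name> <nbd_path> attaches a bdev to an NBD node
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    done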
00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:34:38.372 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73869 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73869 /var/tmp/spdk-nbd.sock 00:34:38.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73869 ']' 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.373 07:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:38.373 [2024-12-06 07:01:10.858012] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
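Two details of this setup matter for reproducing it: the kernel nbd module must already be loaded (the [[ -e /sys/module/nbd ]] guard above), and the bdevs are hosted by bdev_svc, a minimal SPDK app, on a private RPC socket so nothing collides with the default /var/tmp/spdk.sock. A sketch of that launch, with paths from the trace (-i 0 pins the shared-memory instance id; that reading of the flag is an assumption from common SPDK app options):

    sudo modprobe nbd                      # assumption: the CI host may load it some other way
    sudo ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json test/bdev/bdev.json &
    # every later rpc.py call must name the same socket
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_get_bdevs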
00:34:38.373 [2024-12-06 07:01:10.858162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.632 [2024-12-06 07:01:11.024817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.632 [2024-12-06 07:01:11.108526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:39.201 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:39.461 
1+0 records in 00:34:39.461 1+0 records out 00:34:39.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062042 s, 6.6 MB/s 00:34:39.461 07:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:39.461 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:39.461 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:39.461 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:39.461 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:39.461 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:39.461 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:39.461 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:39.720 1+0 records in 00:34:39.720 1+0 records out 00:34:39.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505369 s, 8.1 MB/s 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:39.720 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:34:40.290 07:01:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:40.290 1+0 records in 00:34:40.290 1+0 records out 00:34:40.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087536 s, 4.7 MB/s 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:40.290 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:40.550 1+0 records in 00:34:40.550 1+0 records out 00:34:40.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113228 s, 3.6 MB/s 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:40.550 07:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:40.810 1+0 records in 00:34:40.810 1+0 records out 00:34:40.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000898016 s, 4.6 MB/s 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:40.810 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:34:41.069 07:01:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:41.069 1+0 records in 00:34:41.069 1+0 records out 00:34:41.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857122 s, 4.8 MB/s 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:41.069 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd0", 00:34:41.329 "bdev_name": "nvme0n1" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd1", 00:34:41.329 "bdev_name": "nvme0n2" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd2", 00:34:41.329 "bdev_name": "nvme0n3" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd3", 00:34:41.329 "bdev_name": "nvme1n1" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd4", 00:34:41.329 "bdev_name": "nvme2n1" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd5", 00:34:41.329 "bdev_name": "nvme3n1" 00:34:41.329 } 00:34:41.329 ]' 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd0", 00:34:41.329 "bdev_name": "nvme0n1" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd1", 00:34:41.329 "bdev_name": "nvme0n2" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd2", 00:34:41.329 "bdev_name": "nvme0n3" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd3", 00:34:41.329 "bdev_name": "nvme1n1" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd4", 00:34:41.329 "bdev_name": "nvme2n1" 00:34:41.329 }, 00:34:41.329 { 00:34:41.329 "nbd_device": "/dev/nbd5", 00:34:41.329 "bdev_name": "nvme3n1" 00:34:41.329 } 00:34:41.329 ]' 00:34:41.329 07:01:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:41.329 07:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:41.587 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:41.846 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:34:42.105 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:34:42.105 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:34:42.105 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:34:42.105 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:42.105 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:42.364 07:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:42.630 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:34:42.889 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:34:42.889 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:34:42.889 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:34:42.889 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:42.890 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:42.890 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:34:42.890 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:42.890 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:42.890 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:42.890 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:42.890 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:43.148 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:34:43.428 /dev/nbd0 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:43.428 1+0 records in 00:34:43.428 1+0 records out 00:34:43.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565251 s, 7.2 MB/s 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:43.428 07:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:34:43.699 /dev/nbd1 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:43.699 1+0 records in 00:34:43.699 1+0 records out 00:34:43.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640112 s, 6.4 MB/s 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:43.699 07:01:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:43.699 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:34:43.959 /dev/nbd10 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:43.959 1+0 records in 00:34:43.959 1+0 records out 00:34:43.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004957 s, 8.3 MB/s 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:43.959 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:34:44.218 /dev/nbd11 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd 
-- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:44.218 1+0 records in 00:34:44.218 1+0 records out 00:34:44.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466491 s, 8.8 MB/s 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:44.218 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:34:44.477 /dev/nbd12 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:44.477 1+0 records in 00:34:44.477 1+0 records out 00:34:44.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543666 s, 7.5 MB/s 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:44.477 07:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:34:44.736 /dev/nbd13 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:44.736 1+0 records in 00:34:44.736 1+0 records out 00:34:44.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729535 s, 5.6 MB/s 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:44.736 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:44.737 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:44.996 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd0", 00:34:44.996 "bdev_name": "nvme0n1" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd1", 00:34:44.996 "bdev_name": "nvme0n2" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd10", 00:34:44.996 "bdev_name": "nvme0n3" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd11", 00:34:44.996 "bdev_name": "nvme1n1" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd12", 00:34:44.996 "bdev_name": "nvme2n1" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd13", 00:34:44.996 "bdev_name": "nvme3n1" 00:34:44.996 } 00:34:44.996 ]' 00:34:44.996 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:34:44.996 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd0", 00:34:44.996 "bdev_name": "nvme0n1" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd1", 00:34:44.996 "bdev_name": "nvme0n2" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd10", 00:34:44.996 "bdev_name": "nvme0n3" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd11", 00:34:44.996 "bdev_name": "nvme1n1" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd12", 00:34:44.996 "bdev_name": "nvme2n1" 00:34:44.996 }, 00:34:44.996 { 00:34:44.996 "nbd_device": "/dev/nbd13", 00:34:44.996 "bdev_name": "nvme3n1" 00:34:44.996 } 00:34:44.997 ]' 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:34:44.997 /dev/nbd1 00:34:44.997 /dev/nbd10 00:34:44.997 /dev/nbd11 00:34:44.997 /dev/nbd12 00:34:44.997 /dev/nbd13' 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:34:44.997 /dev/nbd1 00:34:44.997 /dev/nbd10 00:34:44.997 /dev/nbd11 00:34:44.997 /dev/nbd12 00:34:44.997 /dev/nbd13' 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:44.997 256+0 records in 00:34:44.997 256+0 records out 00:34:44.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0078335 s, 134 MB/s 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:44.997 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:45.256 256+0 records in 00:34:45.256 256+0 records out 00:34:45.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155402 s, 6.7 MB/s 00:34:45.256 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:45.256 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:34:45.515 256+0 records in 00:34:45.515 256+0 records out 00:34:45.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177276 s, 5.9 MB/s 00:34:45.515 
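For reference, the nbd_dd_data_verify helper driving these dd runs follows a simple seed-and-compare pattern. A condensed sketch reconstructed from the traced nbd_common.sh lines (the paths, block size, and count are the ones shown in the trace):

tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

# write phase: seed 1 MiB of random data, then copy it onto every exported device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
done

# verify phase: byte-compare the first 1 MiB of each device against the seed file
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"
done
rm "$tmp_file"

The write phase of exactly this loop is what produces the per-device "256+0 records in/out" blocks here; the cmp pass and the nbd_stop_disks teardown follow.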
07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:45.515 07:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:34:45.515 256+0 records in 00:34:45.515 256+0 records out 00:34:45.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144321 s, 7.3 MB/s 00:34:45.515 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:45.515 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:34:45.775 256+0 records in 00:34:45.775 256+0 records out 00:34:45.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162258 s, 6.5 MB/s 00:34:45.775 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:45.775 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:34:45.775 256+0 records in 00:34:45.775 256+0 records out 00:34:45.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185493 s, 5.7 MB/s 00:34:45.775 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:45.775 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:34:46.034 256+0 records in 00:34:46.034 256+0 records out 00:34:46.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164642 s, 6.4 MB/s 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:34:46.034 07:01:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:46.034 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:46.602 07:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:46.861 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 
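Each nbd_stop_disk call above is paired with a waitfornbd_exit poll before the next device is torn down. A minimal sketch of that helper, reconstructed from the xtrace lines (the 20-iteration bound and the /proc/partitions check come straight from the trace; the sleep between polls is an assumption, since xtrace only shows the loop counters):

waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # the device is gone once its name drops out of /proc/partitions
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1   # assumed back-off; not visible in the trace
    done
    return 0
}

In the runs traced here the grep fails on the first poll, so every waitfornbd_exit breaks immediately and returns 0.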
00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:47.120 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:47.380 07:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:47.639 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.898 07:01:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:47.898 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:34:48.157 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:48.417 malloc_lvol_verify 00:34:48.417 07:01:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:48.676 9f6df7e4-65db-48e5-8319-de4c14f65ca9 00:34:48.676 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:48.935 7fec5e0b-f00a-4514-ba1a-eb326b3bc80f 00:34:48.935 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:49.195 /dev/nbd0 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:34:49.195 mke2fs 1.47.0 (5-Feb-2023) 00:34:49.195 Discarding device blocks: 0/4096 done 
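The nbd_with_lvol_verify step being traced here stacks a logical volume on a malloc bdev and proves the whole path end to end by formatting it. Condensed from the RPCs above (sizes are in MiB for the malloc and lvol create calls, 512 B blocks for the malloc bdev; the capacity check reads /sys/block/nbd0/size, which reports 8192 sectors, i.e. 4 MiB):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
(($(< /sys/block/nbd0/size) != 0))                     # kernel must see a non-zero capacity
mkfs.ext4 /dev/nbd0                                    # filesystem creation as the final check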
00:34:49.195 Creating filesystem with 4096 1k blocks and 1024 inodes 00:34:49.195 00:34:49.195 Allocating group tables: 0/1 done 00:34:49.195 Writing inode tables: 0/1 done 00:34:49.195 Creating journal (1024 blocks): done 00:34:49.195 Writing superblocks and filesystem accounting information: 0/1 done 00:34:49.195 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:49.195 07:01:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73869 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73869 ']' 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73869 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.454 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73869 00:34:49.713 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.713 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.713 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73869' 00:34:49.713 killing process with pid 73869 00:34:49.713 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73869 00:34:49.713 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73869 00:34:50.685 07:01:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:34:50.685 00:34:50.685 real 0m12.157s 00:34:50.685 user 0m17.295s 00:34:50.685 sys 0m3.953s 00:34:50.685 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:50.685 07:01:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:50.685 ************************************ 00:34:50.685 END TEST bdev_nbd 00:34:50.685 ************************************ 00:34:50.685 
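killprocess guards the teardown: it refuses to signal an empty or dead pid and double-checks what it is about to kill. A condensed sketch from the traced autotest_common.sh lines (the sudo comparison appears in the trace, but only the non-sudo branch executes here, so the guard's behavior below is an assumption):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                      # pid must still be alive
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1     # assumed guard; the trace shows reactor_0 here
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap, so the test sees the exit status
}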
07:01:22 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:34:50.685 07:01:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:34:50.685 07:01:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:34:50.685 07:01:22 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:34:50.685 07:01:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:50.685 07:01:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:50.685 07:01:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:50.685 ************************************ 00:34:50.685 START TEST bdev_fio 00:34:50.685 ************************************ 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:34:50.685 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:34:50.685 07:01:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- 
# for b in "${bdevs_name[@]}" 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:50.685 ************************************ 00:34:50.685 START TEST bdev_fio_rw_verify 00:34:50.685 ************************************ 00:34:50.685 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:50.686 07:01:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:50.686 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:50.686 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:50.686 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:50.686 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:50.686 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:50.686 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:50.686 fio-3.35 00:34:50.686 Starting 6 threads 00:35:02.884 00:35:02.884 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74289: Fri Dec 6 07:01:33 2024 00:35:02.884 read: IOPS=29.5k, BW=115MiB/s (121MB/s)(1153MiB/10001msec) 00:35:02.884 slat (usec): min=2, max=943, avg= 7.55, stdev= 5.64 00:35:02.884 clat (usec): min=100, max=3706, avg=616.16, stdev=214.29 00:35:02.884 lat (usec): min=113, max=3714, avg=623.71, stdev=215.31 
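The fio run whose results follow is assembled in two steps: fio_config_gen writes one [job_<bdev>] section with a matching filename= line per bdev into bdev.fio, and fio_bdev resolves the sanitizer runtime so the SPDK ioengine plugin can load under ASAN. Both steps are visible in the trace; the invocation, verbatim:

# resolve the ASAN runtime the fio plugin was linked against
asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')

LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

Preloading both libraries is what lets the spdk_bdev ioengine run inside an ASAN-instrumented build without symbol-resolution failures.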
00:35:02.884 clat percentiles (usec): 00:35:02.884 | 50.000th=[ 644], 99.000th=[ 1074], 99.900th=[ 1516], 99.990th=[ 3458], 00:35:02.884 | 99.999th=[ 3621] 00:35:02.884 write: IOPS=29.9k, BW=117MiB/s (122MB/s)(1168MiB/10001msec); 0 zone resets 00:35:02.884 slat (usec): min=12, max=1127, avg=27.17, stdev=27.09 00:35:02.884 clat (usec): min=82, max=6513, avg=717.22, stdev=231.25 00:35:02.884 lat (usec): min=120, max=6552, avg=744.39, stdev=233.16 00:35:02.884 clat percentiles (usec): 00:35:02.884 | 50.000th=[ 742], 99.000th=[ 1287], 99.900th=[ 1975], 99.990th=[ 3654], 00:35:02.884 | 99.999th=[ 6456] 00:35:02.884 bw ( KiB/s): min=98542, max=143552, per=99.74%, avg=119306.26, stdev=2407.37, samples=114 00:35:02.884 iops : min=24634, max=35888, avg=29826.26, stdev=601.85, samples=114 00:35:02.884 lat (usec) : 100=0.01%, 250=2.81%, 500=21.68%, 750=37.00%, 1000=33.80% 00:35:02.884 lat (msec) : 2=4.65%, 4=0.06%, 10=0.01% 00:35:02.884 cpu : usr=59.19%, sys=27.16%, ctx=7786, majf=0, minf=25091 00:35:02.884 IO depths : 1=11.8%, 2=24.2%, 4=50.8%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.884 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.884 issued rwts: total=295113,299067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.884 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:02.884 00:35:02.884 Run status group 0 (all jobs): 00:35:02.884 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1153MiB (1209MB), run=10001-10001msec 00:35:02.884 WRITE: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=1168MiB (1225MB), run=10001-10001msec 00:35:02.884 ----------------------------------------------------- 00:35:02.884 Suppressions used: 00:35:02.884 count bytes template 00:35:02.884 6 48 /usr/src/fio/parse.c 00:35:02.884 3758 360768 /usr/src/fio/iolog.c 00:35:02.884 1 8 libtcmalloc_minimal.so 00:35:02.884 1 904 libcrypto.so 00:35:02.884 ----------------------------------------------------- 00:35:02.884 00:35:02.884 00:35:02.884 real 0m12.194s 00:35:02.884 user 0m37.290s 00:35:02.884 sys 0m16.597s 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:35:02.884 ************************************ 00:35:02.884 END TEST bdev_fio_rw_verify 00:35:02.884 ************************************ 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5b16e0aa-ff22-4dd5-8c83-f46eed0ea8e8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b16e0aa-ff22-4dd5-8c83-f46eed0ea8e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "1d30d5ad-e07b-4744-b49c-d41127a23847"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d30d5ad-e07b-4744-b49c-d41127a23847",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "eda622ea-0995-4dc3-aabb-f1bab32c8387"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eda622ea-0995-4dc3-aabb-f1bab32c8387",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "891b8dc6-8812-48df-8f47-71212c76e224"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "891b8dc6-8812-48df-8f47-71212c76e224",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "96632ece-6ea4-4ab5-8316-d800226683f7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "96632ece-6ea4-4ab5-8316-d800226683f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "4175752c-8e7c-4c9b-b8f7-f187e67790d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4175752c-8e7c-4c9b-b8f7-f187e67790d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:35:02.884 /home/vagrant/spdk_repo/spdk 00:35:02.884 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:35:02.885 07:01:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:35:02.885 00:35:02.885 real 0m12.376s 00:35:02.885 user 
0m37.397s 00:35:02.885 sys 0m16.670s 00:35:02.885 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.885 07:01:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:35:02.885 ************************************ 00:35:02.885 END TEST bdev_fio 00:35:02.885 ************************************ 00:35:02.885 07:01:35 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:02.885 07:01:35 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:02.885 07:01:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:35:02.885 07:01:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.885 07:01:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:02.885 ************************************ 00:35:02.885 START TEST bdev_verify 00:35:02.885 ************************************ 00:35:02.885 07:01:35 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:03.143 [2024-12-06 07:01:35.501763] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:35:03.143 [2024-12-06 07:01:35.501938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74460 ] 00:35:03.143 [2024-12-06 07:01:35.682686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:03.402 [2024-12-06 07:01:35.774687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.402 [2024-12-06 07:01:35.774701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.661 Running I/O for 5 seconds... 
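bdev_verify drives bdevperf directly against the same bdev.json; the invocation is verbatim in the trace (the trailing empty argument is passed through from run_test as shown):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

Here -q 128 keeps 128 I/Os in flight, -o 4096 issues 4 KiB I/Os, -w verify reads back and checks every write, -t 5 bounds the run to five seconds, and -m 0x3 runs two reactors on cores 0 and 1; -C has every core in the mask drive every bdev, which is why each nvme*n1 job appears twice in the table below, once per core mask.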
00:35:05.978 23008.00 IOPS, 89.88 MiB/s [2024-12-06T07:01:39.525Z] 22816.00 IOPS, 89.12 MiB/s [2024-12-06T07:01:40.461Z] 22933.67 IOPS, 89.58 MiB/s [2024-12-06T07:01:41.398Z] 22920.25 IOPS, 89.53 MiB/s [2024-12-06T07:01:41.398Z] 22585.40 IOPS, 88.22 MiB/s 00:35:08.807 Latency(us) 00:35:08.807 [2024-12-06T07:01:41.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.807 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x0 length 0x80000 00:35:08.807 nvme0n1 : 5.06 1670.53 6.53 0.00 0.00 76501.26 11617.75 69110.69 00:35:08.807 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x80000 length 0x80000 00:35:08.807 nvme0n1 : 5.04 1625.37 6.35 0.00 0.00 78625.69 11975.21 79119.83 00:35:08.807 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x0 length 0x80000 00:35:08.807 nvme0n2 : 5.05 1672.11 6.53 0.00 0.00 76327.93 12153.95 71970.44 00:35:08.807 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x80000 length 0x80000 00:35:08.807 nvme0n2 : 5.04 1624.98 6.35 0.00 0.00 78538.04 15371.17 73400.32 00:35:08.807 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x0 length 0x80000 00:35:08.807 nvme0n3 : 5.05 1671.59 6.53 0.00 0.00 76247.35 15609.48 71017.19 00:35:08.807 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x80000 length 0x80000 00:35:08.807 nvme0n3 : 5.04 1624.58 6.35 0.00 0.00 78435.31 14537.08 75306.82 00:35:08.807 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x0 length 0x20000 00:35:08.807 nvme1n1 : 5.06 1669.55 6.52 0.00 0.00 76237.51 10962.39 83886.08 00:35:08.807 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x20000 length 0x20000 00:35:08.807 nvme1n1 : 5.03 1628.19 6.36 0.00 0.00 78142.60 13464.67 80549.70 00:35:08.807 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x0 length 0xbd0bd 00:35:08.807 nvme2n1 : 5.06 2925.92 11.43 0.00 0.00 43382.65 4289.63 74353.57 00:35:08.807 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:35:08.807 nvme2n1 : 5.06 2897.44 11.32 0.00 0.00 43797.91 3932.16 80073.08 00:35:08.807 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0x0 length 0xa0000 00:35:08.807 nvme3n1 : 5.07 1692.90 6.61 0.00 0.00 74882.00 4974.78 73400.32 00:35:08.807 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:08.807 Verification LBA range: start 0xa0000 length 0xa0000 00:35:08.807 nvme3n1 : 5.07 1666.40 6.51 0.00 0.00 76029.62 5153.51 80073.08 00:35:08.807 [2024-12-06T07:01:41.398Z] =================================================================================================================== 00:35:08.807 [2024-12-06T07:01:41.398Z] Total : 22369.55 87.38 0.00 0.00 68272.67 3932.16 83886.08 00:35:09.742 00:35:09.742 real 0m6.730s 00:35:09.742 user 0m10.558s 00:35:09.742 sys 0m1.765s 00:35:09.742 07:01:42 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.742 ************************************ 00:35:09.742 END TEST bdev_verify 00:35:09.742 ************************************ 00:35:09.742 07:01:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:35:09.742 07:01:42 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:09.742 07:01:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:35:09.742 07:01:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.742 07:01:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:09.742 ************************************ 00:35:09.742 START TEST bdev_verify_big_io 00:35:09.742 ************************************ 00:35:09.742 07:01:42 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:09.742 [2024-12-06 07:01:42.279025] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:35:09.742 [2024-12-06 07:01:42.279195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74555 ] 00:35:10.001 [2024-12-06 07:01:42.456700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:10.001 [2024-12-06 07:01:42.543686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.001 [2024-12-06 07:01:42.543694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.568 Running I/O for 5 seconds... 
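bdev_verify_big_io repeats the same bdevperf verify run with only the I/O size raised, as the traced command shows:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''

Switching -o from 4096 to 65536 pushes 64 KiB I/Os through the same queue depth and core mask, which is why the table below reports far lower IOPS but comparable or higher MiB/s than the 4 KiB verify pass above.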
00:35:16.663 2460.00 IOPS, 153.75 MiB/s [2024-12-06T07:01:49.254Z] 4174.50 IOPS, 260.91 MiB/s
00:35:16.663 Latency(us)
00:35:16.663 [2024-12-06T07:01:49.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:16.663 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x0 length 0x8000
00:35:16.663 nvme0n1 : 5.81 143.25 8.95 0.00 0.00 866042.77 139174.63 1235413.18
00:35:16.663 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x8000 length 0x8000
00:35:16.663 nvme0n1 : 5.82 107.13 6.70 0.00 0.00 1173836.45 34078.72 2059021.96
00:35:16.663 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x0 length 0x8000
00:35:16.663 nvme0n2 : 5.81 93.60 5.85 0.00 0.00 1286805.49 111530.36 1921753.83
00:35:16.663 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x8000 length 0x8000
00:35:16.663 nvme0n2 : 5.82 129.31 8.08 0.00 0.00 921503.03 25618.62 1403185.34
00:35:16.663 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x0 length 0x8000
00:35:16.663 nvme0n3 : 5.81 141.71 8.86 0.00 0.00 835313.08 22163.08 1098145.05
00:35:16.663 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x8000 length 0x8000
00:35:16.663 nvme0n3 : 5.83 128.92 8.06 0.00 0.00 924892.96 14477.50 1868371.78
00:35:16.663 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x0 length 0x2000
00:35:16.663 nvme1n1 : 5.83 153.81 9.61 0.00 0.00 749564.81 30742.34 1151527.10
00:35:16.663 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x2000 length 0x2000
00:35:16.663 nvme1n1 : 5.83 151.02 9.44 0.00 0.00 769426.09 22163.08 1098145.05
00:35:16.663 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:16.663 Verification LBA range: start 0x0 length 0xbd0b
00:35:16.663 nvme2n1 : 5.83 187.36 11.71 0.00 0.00 605804.73 7983.48 682527.65
00:35:16.664 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:16.664 Verification LBA range: start 0xbd0b length 0xbd0b
00:35:16.664 nvme2n1 : 5.84 180.95 11.31 0.00 0.00 623895.71 7298.33 1182031.13
00:35:16.664 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:16.664 Verification LBA range: start 0x0 length 0xa000
00:35:16.664 nvme3n1 : 5.83 164.56 10.28 0.00 0.00 672604.22 9234.62 899868.86
00:35:16.664 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:16.664 Verification LBA range: start 0xa000 length 0xa000
00:35:16.664 nvme3n1 : 5.82 141.52 8.84 0.00 0.00 775941.90 7923.90 1570957.50
00:35:16.664 [2024-12-06T07:01:49.255Z] ===================================================================================================================
00:35:16.664 [2024-12-06T07:01:49.255Z] Total : 1723.14 107.70 0.00 0.00 815462.87 7298.33 2059021.96
00:35:17.601
00:35:17.601 real 0m7.781s
00:35:17.601 user 0m14.216s
00:35:17.601 sys 0m0.481s
00:35:17.601 07:01:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:17.601 ************************************
00:35:17.601 END TEST bdev_verify_big_io
00:35:17.601 ************************************
00:35:17.601 07:01:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:35:17.601 07:01:50 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:17.601 07:01:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:35:17.601 07:01:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:17.601 07:01:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:35:17.601 ************************************
00:35:17.601 START TEST bdev_write_zeroes
00:35:17.601 ************************************
00:35:17.601 07:01:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:17.860 [2024-12-06 07:01:50.114621] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... [2024-12-06 07:01:50.114829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74664 ]
00:35:17.860 [2024-12-06 07:01:50.291240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:17.860 [2024-12-06 07:01:50.371661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:18.427 Running I/O for 1 seconds...
00:35:19.415 78976.00 IOPS, 308.50 MiB/s
00:35:19.415 Latency(us)
00:35:19.415 [2024-12-06T07:01:52.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:19.415 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:19.415 nvme0n1 : 1.03 11910.43 46.53 0.00 0.00 10735.51 6106.76 37653.41
00:35:19.415 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:19.415 nvme0n2 : 1.03 11888.15 46.44 0.00 0.00 10745.29 6136.55 38606.66
00:35:19.415 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:19.415 nvme0n3 : 1.04 11865.38 46.35 0.00 0.00 10755.48 6136.55 39321.60
00:35:19.415 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:19.415 nvme1n1 : 1.04 11935.97 46.62 0.00 0.00 10680.95 4706.68 31933.91
00:35:19.415 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:19.415 nvme2n1 : 1.04 17636.31 68.89 0.00 0.00 7220.40 3872.58 36461.85
00:35:19.415 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:19.415 nvme3n1 : 1.04 11916.20 46.55 0.00 0.00 10617.93 4349.21 38130.04
00:35:19.415 [2024-12-06T07:01:52.006Z] ===================================================================================================================
00:35:19.415 [2024-12-06T07:01:52.006Z] Total : 77152.44 301.38 0.00 0.00 9909.11 3872.58 39321.60
00:35:20.350
00:35:20.350 real 0m2.665s
00:35:20.350 user 0m1.864s
00:35:20.350 sys 0m0.614s
00:35:20.350 07:01:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:20.350 ************************************
00:35:20.350 END TEST bdev_write_zeroes
00:35:20.350 ************************************
00:35:20.350 07:01:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:35:20.350 07:01:52 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:20.350 07:01:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:35:20.350 07:01:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:20.350 07:01:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:35:20.350 ************************************
00:35:20.350 START TEST bdev_json_nonenclosed
00:35:20.350 ************************************
00:35:20.350 07:01:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:20.350 [2024-12-06 07:01:52.809523] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... [2024-12-06 07:01:52.809654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74707 ]
00:35:20.608 [2024-12-06 07:01:52.975219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:20.608 [2024-12-06 07:01:53.055984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:20.608 [2024-12-06 07:01:53.056119] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:35:20.608 [2024-12-06 07:01:53.056186] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:35:20.608 [2024-12-06 07:01:53.056200] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:35:20.867
00:35:20.867 real 0m0.538s
00:35:20.867 user 0m0.323s
00:35:20.867 sys 0m0.110s
00:35:20.867 07:01:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:20.867 07:01:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:35:20.867 ************************************
00:35:20.867 END TEST bdev_json_nonenclosed
00:35:20.867 ************************************
00:35:20.867 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:20.867 07:01:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:35:20.867 07:01:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:20.867 07:01:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:35:20.867 ************************************
00:35:20.867 START TEST bdev_json_nonarray
00:35:20.867 ************************************
00:35:20.867 07:01:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:20.867 [2024-12-06 07:01:53.422349] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
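The two json_config negative tests here feed bdevperf deliberately malformed configs and pass as long as the target rejects them gracefully (the spdk_app_stop'd on non-zero warning above) rather than crashing. A minimal sketch of the two failure shapes being exercised, written as hypothetical stand-ins for the shipped fixture files, not their exact contents:

    # nonenclosed.json-style input: a valid body that is not enclosed in {}
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json
    # nonarray.json-style input: enclosed, but "subsystems" is not an array
    printf '%s\n' '{ "subsystems": {} }' > /tmp/nonarray.json

Either shape makes json_config_prepare_ctx abort with the *ERROR* lines seen in these runs.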
00:35:20.867 [2024-12-06 07:01:53.422512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74733 ] 00:35:21.127 [2024-12-06 07:01:53.600452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.127 [2024-12-06 07:01:53.686337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.127 [2024-12-06 07:01:53.686476] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:35:21.127 [2024-12-06 07:01:53.686502] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:21.127 [2024-12-06 07:01:53.686516] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:21.387 00:35:21.387 real 0m0.580s 00:35:21.387 user 0m0.348s 00:35:21.387 sys 0m0.128s 00:35:21.387 07:01:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.387 07:01:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:35:21.387 ************************************ 00:35:21.387 END TEST bdev_json_nonarray 00:35:21.387 ************************************ 00:35:21.387 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:35:21.387 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:35:21.388 07:01:53 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:21.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:26.147 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:35:26.147 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:26.147 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:35:26.147 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:35:26.405 00:35:26.405 real 0m56.710s 00:35:26.406 user 1m35.772s 00:35:26.406 sys 0m33.516s 00:35:26.406 07:01:58 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.406 07:01:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:26.406 ************************************ 00:35:26.406 END TEST blockdev_xnvme 00:35:26.406 ************************************ 00:35:26.406 07:01:58 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:35:26.406 07:01:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:26.406 07:01:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.406 07:01:58 -- 
common/autotest_common.sh@10 -- # set +x 00:35:26.406 ************************************ 00:35:26.406 START TEST ublk 00:35:26.406 ************************************ 00:35:26.406 07:01:58 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:35:26.406 * Looking for test storage... 00:35:26.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:35:26.406 07:01:58 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:26.406 07:01:58 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:35:26.406 07:01:58 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:26.665 07:01:59 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.665 07:01:59 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.665 07:01:59 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.665 07:01:59 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.665 07:01:59 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.665 07:01:59 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.665 07:01:59 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.665 07:01:59 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.665 07:01:59 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.665 07:01:59 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.665 07:01:59 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.665 07:01:59 ublk -- scripts/common.sh@344 -- # case "$op" in 00:35:26.665 07:01:59 ublk -- scripts/common.sh@345 -- # : 1 00:35:26.665 07:01:59 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.665 07:01:59 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:26.665 07:01:59 ublk -- scripts/common.sh@365 -- # decimal 1 00:35:26.665 07:01:59 ublk -- scripts/common.sh@353 -- # local d=1 00:35:26.665 07:01:59 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.665 07:01:59 ublk -- scripts/common.sh@355 -- # echo 1 00:35:26.665 07:01:59 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.665 07:01:59 ublk -- scripts/common.sh@366 -- # decimal 2 00:35:26.665 07:01:59 ublk -- scripts/common.sh@353 -- # local d=2 00:35:26.665 07:01:59 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.665 07:01:59 ublk -- scripts/common.sh@355 -- # echo 2 00:35:26.665 07:01:59 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.665 07:01:59 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.665 07:01:59 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.665 07:01:59 ublk -- scripts/common.sh@368 -- # return 0 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:26.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.665 --rc genhtml_branch_coverage=1 00:35:26.665 --rc genhtml_function_coverage=1 00:35:26.665 --rc genhtml_legend=1 00:35:26.665 --rc geninfo_all_blocks=1 00:35:26.665 --rc geninfo_unexecuted_blocks=1 00:35:26.665 00:35:26.665 ' 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:26.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.665 --rc genhtml_branch_coverage=1 00:35:26.665 --rc genhtml_function_coverage=1 00:35:26.665 --rc genhtml_legend=1 00:35:26.665 --rc geninfo_all_blocks=1 00:35:26.665 --rc geninfo_unexecuted_blocks=1 00:35:26.665 00:35:26.665 ' 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:26.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.665 --rc genhtml_branch_coverage=1 00:35:26.665 --rc genhtml_function_coverage=1 00:35:26.665 --rc genhtml_legend=1 00:35:26.665 --rc geninfo_all_blocks=1 00:35:26.665 --rc geninfo_unexecuted_blocks=1 00:35:26.665 00:35:26.665 ' 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:26.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.665 --rc genhtml_branch_coverage=1 00:35:26.665 --rc genhtml_function_coverage=1 00:35:26.665 --rc genhtml_legend=1 00:35:26.665 --rc geninfo_all_blocks=1 00:35:26.665 --rc geninfo_unexecuted_blocks=1 00:35:26.665 00:35:26.665 ' 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:35:26.665 07:01:59 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:35:26.665 07:01:59 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:35:26.665 07:01:59 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:35:26.665 07:01:59 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:35:26.665 07:01:59 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:35:26.665 07:01:59 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:35:26.665 07:01:59 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:35:26.665 07:01:59 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:35:26.665 07:01:59 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:35:26.665 07:01:59 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.665 07:01:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:26.665 ************************************ 00:35:26.665 START TEST test_save_ublk_config 00:35:26.665 ************************************ 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75043 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75043 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75043 ']' 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.665 07:01:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:26.665 [2024-12-06 07:01:59.163246] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
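test_save_config, which this freshly launched target (pid 75043) serves, drives a short RPC sequence before snapshotting the target's state. A condensed sketch of the flow traced below; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the malloc creation flags are assumptions inferred from the saved config (num_blocks 8192 x block_size 4096 = 32 MiB), since the trace only shows the resulting bdev:

    rpc_cmd ublk_create_target                      # 'UBLK target created successfully'
    rpc_cmd bdev_malloc_create -b malloc0 32 4096   # hypothetical flags; yields bdev 'malloc0'
    rpc_cmd ublk_start_disk malloc0 0 -q 1 -d 128   # matches num_queues/queue_depth in the dump
    config=$(rpc_cmd save_config)                   # captured verbatim in the JSON that follows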
00:35:26.665 [2024-12-06 07:01:59.163949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75043 ] 00:35:26.925 [2024-12-06 07:01:59.336207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.925 [2024-12-06 07:01:59.459674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:27.861 [2024-12-06 07:02:00.122789] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:27.861 [2024-12-06 07:02:00.123859] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:27.861 malloc0 00:35:27.861 [2024-12-06 07:02:00.186865] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:35:27.861 [2024-12-06 07:02:00.186985] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:35:27.861 [2024-12-06 07:02:00.187004] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:35:27.861 [2024-12-06 07:02:00.187013] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:35:27.861 [2024-12-06 07:02:00.194813] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:27.861 [2024-12-06 07:02:00.194839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:27.861 [2024-12-06 07:02:00.200848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:27.861 [2024-12-06 07:02:00.200982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:35:27.861 [2024-12-06 07:02:00.224818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:35:27.861 0 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.861 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:28.121 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.121 07:02:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:35:28.121 "subsystems": [ 00:35:28.121 { 00:35:28.121 "subsystem": "fsdev", 00:35:28.121 "config": [ 00:35:28.121 { 00:35:28.121 "method": "fsdev_set_opts", 00:35:28.121 "params": { 00:35:28.121 "fsdev_io_pool_size": 65535, 00:35:28.121 "fsdev_io_cache_size": 256 00:35:28.121 } 00:35:28.121 } 00:35:28.121 ] 00:35:28.121 }, 00:35:28.121 { 00:35:28.121 "subsystem": "keyring", 00:35:28.121 "config": [] 00:35:28.121 }, 00:35:28.121 { 00:35:28.121 "subsystem": "iobuf", 00:35:28.121 "config": [ 00:35:28.121 { 
00:35:28.121 "method": "iobuf_set_options", 00:35:28.121 "params": { 00:35:28.121 "small_pool_count": 8192, 00:35:28.121 "large_pool_count": 1024, 00:35:28.121 "small_bufsize": 8192, 00:35:28.121 "large_bufsize": 135168, 00:35:28.121 "enable_numa": false 00:35:28.121 } 00:35:28.121 } 00:35:28.121 ] 00:35:28.121 }, 00:35:28.121 { 00:35:28.121 "subsystem": "sock", 00:35:28.121 "config": [ 00:35:28.121 { 00:35:28.121 "method": "sock_set_default_impl", 00:35:28.121 "params": { 00:35:28.121 "impl_name": "posix" 00:35:28.121 } 00:35:28.121 }, 00:35:28.121 { 00:35:28.121 "method": "sock_impl_set_options", 00:35:28.121 "params": { 00:35:28.121 "impl_name": "ssl", 00:35:28.121 "recv_buf_size": 4096, 00:35:28.121 "send_buf_size": 4096, 00:35:28.121 "enable_recv_pipe": true, 00:35:28.121 "enable_quickack": false, 00:35:28.121 "enable_placement_id": 0, 00:35:28.121 "enable_zerocopy_send_server": true, 00:35:28.121 "enable_zerocopy_send_client": false, 00:35:28.121 "zerocopy_threshold": 0, 00:35:28.121 "tls_version": 0, 00:35:28.121 "enable_ktls": false 00:35:28.121 } 00:35:28.121 }, 00:35:28.121 { 00:35:28.121 "method": "sock_impl_set_options", 00:35:28.121 "params": { 00:35:28.121 "impl_name": "posix", 00:35:28.121 "recv_buf_size": 2097152, 00:35:28.121 "send_buf_size": 2097152, 00:35:28.121 "enable_recv_pipe": true, 00:35:28.121 "enable_quickack": false, 00:35:28.121 "enable_placement_id": 0, 00:35:28.121 "enable_zerocopy_send_server": true, 00:35:28.121 "enable_zerocopy_send_client": false, 00:35:28.121 "zerocopy_threshold": 0, 00:35:28.121 "tls_version": 0, 00:35:28.121 "enable_ktls": false 00:35:28.121 } 00:35:28.121 } 00:35:28.121 ] 00:35:28.121 }, 00:35:28.121 { 00:35:28.121 "subsystem": "vmd", 00:35:28.121 "config": [] 00:35:28.121 }, 00:35:28.121 { 00:35:28.121 "subsystem": "accel", 00:35:28.121 "config": [ 00:35:28.121 { 00:35:28.121 "method": "accel_set_options", 00:35:28.121 "params": { 00:35:28.121 "small_cache_size": 128, 00:35:28.121 "large_cache_size": 16, 00:35:28.121 "task_count": 2048, 00:35:28.121 "sequence_count": 2048, 00:35:28.121 "buf_count": 2048 00:35:28.121 } 00:35:28.121 } 00:35:28.121 ] 00:35:28.121 }, 00:35:28.121 { 00:35:28.121 "subsystem": "bdev", 00:35:28.121 "config": [ 00:35:28.121 { 00:35:28.121 "method": "bdev_set_options", 00:35:28.121 "params": { 00:35:28.121 "bdev_io_pool_size": 65535, 00:35:28.121 "bdev_io_cache_size": 256, 00:35:28.122 "bdev_auto_examine": true, 00:35:28.122 "iobuf_small_cache_size": 128, 00:35:28.122 "iobuf_large_cache_size": 16 00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "bdev_raid_set_options", 00:35:28.122 "params": { 00:35:28.122 "process_window_size_kb": 1024, 00:35:28.122 "process_max_bandwidth_mb_sec": 0 00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "bdev_iscsi_set_options", 00:35:28.122 "params": { 00:35:28.122 "timeout_sec": 30 00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "bdev_nvme_set_options", 00:35:28.122 "params": { 00:35:28.122 "action_on_timeout": "none", 00:35:28.122 "timeout_us": 0, 00:35:28.122 "timeout_admin_us": 0, 00:35:28.122 "keep_alive_timeout_ms": 10000, 00:35:28.122 "arbitration_burst": 0, 00:35:28.122 "low_priority_weight": 0, 00:35:28.122 "medium_priority_weight": 0, 00:35:28.122 "high_priority_weight": 0, 00:35:28.122 "nvme_adminq_poll_period_us": 10000, 00:35:28.122 "nvme_ioq_poll_period_us": 0, 00:35:28.122 "io_queue_requests": 0, 00:35:28.122 "delay_cmd_submit": true, 00:35:28.122 "transport_retry_count": 4, 00:35:28.122 
"bdev_retry_count": 3, 00:35:28.122 "transport_ack_timeout": 0, 00:35:28.122 "ctrlr_loss_timeout_sec": 0, 00:35:28.122 "reconnect_delay_sec": 0, 00:35:28.122 "fast_io_fail_timeout_sec": 0, 00:35:28.122 "disable_auto_failback": false, 00:35:28.122 "generate_uuids": false, 00:35:28.122 "transport_tos": 0, 00:35:28.122 "nvme_error_stat": false, 00:35:28.122 "rdma_srq_size": 0, 00:35:28.122 "io_path_stat": false, 00:35:28.122 "allow_accel_sequence": false, 00:35:28.122 "rdma_max_cq_size": 0, 00:35:28.122 "rdma_cm_event_timeout_ms": 0, 00:35:28.122 "dhchap_digests": [ 00:35:28.122 "sha256", 00:35:28.122 "sha384", 00:35:28.122 "sha512" 00:35:28.122 ], 00:35:28.122 "dhchap_dhgroups": [ 00:35:28.122 "null", 00:35:28.122 "ffdhe2048", 00:35:28.122 "ffdhe3072", 00:35:28.122 "ffdhe4096", 00:35:28.122 "ffdhe6144", 00:35:28.122 "ffdhe8192" 00:35:28.122 ] 00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "bdev_nvme_set_hotplug", 00:35:28.122 "params": { 00:35:28.122 "period_us": 100000, 00:35:28.122 "enable": false 00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "bdev_malloc_create", 00:35:28.122 "params": { 00:35:28.122 "name": "malloc0", 00:35:28.122 "num_blocks": 8192, 00:35:28.122 "block_size": 4096, 00:35:28.122 "physical_block_size": 4096, 00:35:28.122 "uuid": "d4d8e1d3-6f35-4400-a21f-35a90410349c", 00:35:28.122 "optimal_io_boundary": 0, 00:35:28.122 "md_size": 0, 00:35:28.122 "dif_type": 0, 00:35:28.122 "dif_is_head_of_md": false, 00:35:28.122 "dif_pi_format": 0 00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "bdev_wait_for_examine" 00:35:28.122 } 00:35:28.122 ] 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "subsystem": "scsi", 00:35:28.122 "config": null 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "subsystem": "scheduler", 00:35:28.122 "config": [ 00:35:28.122 { 00:35:28.122 "method": "framework_set_scheduler", 00:35:28.122 "params": { 00:35:28.122 "name": "static" 00:35:28.122 } 00:35:28.122 } 00:35:28.122 ] 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "subsystem": "vhost_scsi", 00:35:28.122 "config": [] 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "subsystem": "vhost_blk", 00:35:28.122 "config": [] 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "subsystem": "ublk", 00:35:28.122 "config": [ 00:35:28.122 { 00:35:28.122 "method": "ublk_create_target", 00:35:28.122 "params": { 00:35:28.122 "cpumask": "1" 00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "ublk_start_disk", 00:35:28.122 "params": { 00:35:28.122 "bdev_name": "malloc0", 00:35:28.122 "ublk_id": 0, 00:35:28.122 "num_queues": 1, 00:35:28.122 "queue_depth": 128 00:35:28.122 } 00:35:28.122 } 00:35:28.122 ] 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "subsystem": "nbd", 00:35:28.122 "config": [] 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "subsystem": "nvmf", 00:35:28.122 "config": [ 00:35:28.122 { 00:35:28.122 "method": "nvmf_set_config", 00:35:28.122 "params": { 00:35:28.122 "discovery_filter": "match_any", 00:35:28.122 "admin_cmd_passthru": { 00:35:28.122 "identify_ctrlr": false 00:35:28.122 }, 00:35:28.122 "dhchap_digests": [ 00:35:28.122 "sha256", 00:35:28.122 "sha384", 00:35:28.122 "sha512" 00:35:28.122 ], 00:35:28.122 "dhchap_dhgroups": [ 00:35:28.122 "null", 00:35:28.122 "ffdhe2048", 00:35:28.122 "ffdhe3072", 00:35:28.122 "ffdhe4096", 00:35:28.122 "ffdhe6144", 00:35:28.122 "ffdhe8192" 00:35:28.122 ] 00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "nvmf_set_max_subsystems", 00:35:28.122 "params": { 00:35:28.122 "max_subsystems": 1024 
00:35:28.122 } 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "method": "nvmf_set_crdt", 00:35:28.122 "params": { 00:35:28.122 "crdt1": 0, 00:35:28.122 "crdt2": 0, 00:35:28.122 "crdt3": 0 00:35:28.122 } 00:35:28.122 } 00:35:28.122 ] 00:35:28.122 }, 00:35:28.122 { 00:35:28.122 "subsystem": "iscsi", 00:35:28.122 "config": [ 00:35:28.122 { 00:35:28.122 "method": "iscsi_set_options", 00:35:28.122 "params": { 00:35:28.122 "node_base": "iqn.2016-06.io.spdk", 00:35:28.122 "max_sessions": 128, 00:35:28.122 "max_connections_per_session": 2, 00:35:28.122 "max_queue_depth": 64, 00:35:28.122 "default_time2wait": 2, 00:35:28.123 "default_time2retain": 20, 00:35:28.123 "first_burst_length": 8192, 00:35:28.123 "immediate_data": true, 00:35:28.123 "allow_duplicated_isid": false, 00:35:28.123 "error_recovery_level": 0, 00:35:28.123 "nop_timeout": 60, 00:35:28.123 "nop_in_interval": 30, 00:35:28.123 "disable_chap": false, 00:35:28.123 "require_chap": false, 00:35:28.123 "mutual_chap": false, 00:35:28.123 "chap_group": 0, 00:35:28.123 "max_large_datain_per_connection": 64, 00:35:28.123 "max_r2t_per_connection": 4, 00:35:28.123 "pdu_pool_size": 36864, 00:35:28.123 "immediate_data_pool_size": 16384, 00:35:28.123 "data_out_pool_size": 2048 00:35:28.123 } 00:35:28.123 } 00:35:28.123 ] 00:35:28.123 } 00:35:28.123 ] 00:35:28.123 }' 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75043 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75043 ']' 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75043 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75043 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:28.123 killing process with pid 75043 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75043' 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75043 00:35:28.123 07:02:00 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75043 00:35:29.059 [2024-12-06 07:02:01.614005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:35:29.318 [2024-12-06 07:02:01.653778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:29.318 [2024-12-06 07:02:01.653927] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:35:29.318 [2024-12-06 07:02:01.660839] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:29.318 [2024-12-06 07:02:01.660932] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:35:29.318 [2024-12-06 07:02:01.660953] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:35:29.318 [2024-12-06 07:02:01.661002] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:29.318 [2024-12-06 07:02:01.661212] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:30.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
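The second half of the test replays the captured JSON into a brand-new target. The -c /dev/fd/63 seen in the trace below is what bash process substitution expands to, so the relaunch is equivalent to:

    # $config holds the save_config output captured above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(echo "$config")

A valid snapshot must recreate the ublk target and /dev/ublkb0 with no further RPCs.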
00:35:30.696 07:02:03 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75098 00:35:30.696 07:02:03 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75098 00:35:30.696 07:02:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75098 ']' 00:35:30.696 07:02:03 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.696 07:02:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:30.696 07:02:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:35:30.696 07:02:03 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.696 07:02:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:35:30.696 "subsystems": [ 00:35:30.696 { 00:35:30.696 "subsystem": "fsdev", 00:35:30.696 "config": [ 00:35:30.696 { 00:35:30.696 "method": "fsdev_set_opts", 00:35:30.697 "params": { 00:35:30.697 "fsdev_io_pool_size": 65535, 00:35:30.697 "fsdev_io_cache_size": 256 00:35:30.697 } 00:35:30.697 } 00:35:30.697 ] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "keyring", 00:35:30.697 "config": [] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "iobuf", 00:35:30.697 "config": [ 00:35:30.697 { 00:35:30.697 "method": "iobuf_set_options", 00:35:30.697 "params": { 00:35:30.697 "small_pool_count": 8192, 00:35:30.697 "large_pool_count": 1024, 00:35:30.697 "small_bufsize": 8192, 00:35:30.697 "large_bufsize": 135168, 00:35:30.697 "enable_numa": false 00:35:30.697 } 00:35:30.697 } 00:35:30.697 ] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "sock", 00:35:30.697 "config": [ 00:35:30.697 { 00:35:30.697 "method": "sock_set_default_impl", 00:35:30.697 "params": { 00:35:30.697 "impl_name": "posix" 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "sock_impl_set_options", 00:35:30.697 "params": { 00:35:30.697 "impl_name": "ssl", 00:35:30.697 "recv_buf_size": 4096, 00:35:30.697 "send_buf_size": 4096, 00:35:30.697 "enable_recv_pipe": true, 00:35:30.697 "enable_quickack": false, 00:35:30.697 "enable_placement_id": 0, 00:35:30.697 "enable_zerocopy_send_server": true, 00:35:30.697 "enable_zerocopy_send_client": false, 00:35:30.697 "zerocopy_threshold": 0, 00:35:30.697 "tls_version": 0, 00:35:30.697 "enable_ktls": false 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "sock_impl_set_options", 00:35:30.697 "params": { 00:35:30.697 "impl_name": "posix", 00:35:30.697 "recv_buf_size": 2097152, 00:35:30.697 "send_buf_size": 2097152, 00:35:30.697 "enable_recv_pipe": true, 00:35:30.697 "enable_quickack": false, 00:35:30.697 "enable_placement_id": 0, 00:35:30.697 "enable_zerocopy_send_server": true, 00:35:30.697 "enable_zerocopy_send_client": false, 00:35:30.697 "zerocopy_threshold": 0, 00:35:30.697 "tls_version": 0, 00:35:30.697 "enable_ktls": false 00:35:30.697 } 00:35:30.697 } 00:35:30.697 ] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "vmd", 00:35:30.697 "config": [] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "accel", 00:35:30.697 "config": [ 00:35:30.697 { 00:35:30.697 "method": "accel_set_options", 00:35:30.697 "params": { 00:35:30.697 "small_cache_size": 128, 00:35:30.697 "large_cache_size": 16, 00:35:30.697 "task_count": 2048, 00:35:30.697 "sequence_count": 2048, 00:35:30.697 "buf_count": 2048 00:35:30.697 } 
00:35:30.697 } 00:35:30.697 ] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "bdev", 00:35:30.697 "config": [ 00:35:30.697 { 00:35:30.697 "method": "bdev_set_options", 00:35:30.697 "params": { 00:35:30.697 "bdev_io_pool_size": 65535, 00:35:30.697 "bdev_io_cache_size": 256, 00:35:30.697 "bdev_auto_examine": true, 00:35:30.697 "iobuf_small_cache_size": 128, 00:35:30.697 "iobuf_large_cache_size": 16 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "bdev_raid_set_options", 00:35:30.697 "params": { 00:35:30.697 "process_window_size_kb": 1024, 00:35:30.697 "process_max_bandwidth_mb_sec": 0 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "bdev_iscsi_set_options", 00:35:30.697 "params": { 00:35:30.697 "timeout_sec": 30 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "bdev_nvme_set_options", 00:35:30.697 "params": { 00:35:30.697 "action_on_timeout": "none", 00:35:30.697 "timeout_us": 0, 00:35:30.697 "timeout_admin_us": 0, 00:35:30.697 "keep_alive_timeout_ms": 10000, 00:35:30.697 "arbitration_burst": 0, 00:35:30.697 "low_priority_weight": 0, 00:35:30.697 "medium_priority_weight": 0, 00:35:30.697 "high_priority_weight": 0, 00:35:30.697 "nvme_adminq_poll_period_us": 10000, 00:35:30.697 "nvme_ioq_poll_period_us": 0, 00:35:30.697 "io_queue_requests": 0, 00:35:30.697 "delay_cmd_submit": true, 00:35:30.697 "transport_retry_count": 4, 00:35:30.697 "bdev_retry_count": 3, 00:35:30.697 "transport_ack_timeout": 0, 00:35:30.697 "ctrlr_loss_timeout_sec": 0, 00:35:30.697 "reconnect_delay_sec": 0, 00:35:30.697 "fast_io_fail_timeout_sec": 0, 00:35:30.697 "disable_auto_failback": false, 00:35:30.697 "generate_uuids": false, 00:35:30.697 "transport_tos": 0, 00:35:30.697 "nvme_error_stat": false, 00:35:30.697 "rdma_srq_size": 0, 00:35:30.697 "io_path_stat": false, 00:35:30.697 "allow_accel_sequence": false, 00:35:30.697 "rdma_max_cq_size": 0, 00:35:30.697 "rdma_cm_event_timeout_ms": 0, 00:35:30.697 "dhchap_digests": [ 00:35:30.697 "sha256", 00:35:30.697 "sha384", 00:35:30.697 "sha512" 00:35:30.697 ], 00:35:30.697 "dhchap_dhgroups": [ 00:35:30.697 "null", 00:35:30.697 "ffdhe2048", 00:35:30.697 "ffdhe3072", 00:35:30.697 "ffdhe4096", 00:35:30.697 "ffdhe6144", 00:35:30.697 "ffdhe8192" 00:35:30.697 ] 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "bdev_nvme_set_hotplug", 00:35:30.697 "params": { 00:35:30.697 "period_us": 100000, 00:35:30.697 "enable": false 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "bdev_malloc_create", 00:35:30.697 "params": { 00:35:30.697 "name": "malloc0", 00:35:30.697 "num_blocks": 8192, 00:35:30.697 "block_size": 4096, 00:35:30.697 "physical_block_size": 4096, 00:35:30.697 "uuid": "d4d8e1d3-6f35-4400-a21f-35a90410349c", 00:35:30.697 "optimal_io_boundary": 0, 00:35:30.697 "md_size": 0, 00:35:30.697 "dif_type": 0, 00:35:30.697 "dif_is_head_of_md": false, 00:35:30.697 "dif_pi_format": 0 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "bdev_wait_for_examine" 00:35:30.697 } 00:35:30.697 ] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "scsi", 00:35:30.697 "config": null 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "scheduler", 00:35:30.697 "config": [ 00:35:30.697 { 00:35:30.697 "method": "framework_set_scheduler", 00:35:30.697 "params": { 00:35:30.697 "name": "static" 00:35:30.697 } 00:35:30.697 } 00:35:30.697 ] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "vhost_scsi", 00:35:30.697 "config": [] 00:35:30.697 }, 00:35:30.697 { 
00:35:30.697 "subsystem": "vhost_blk", 00:35:30.697 "config": [] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "ublk", 00:35:30.697 "config": [ 00:35:30.697 { 00:35:30.697 "method": "ublk_create_target", 00:35:30.697 "params": { 00:35:30.697 "cpumask": "1" 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "ublk_start_disk", 00:35:30.697 "params": { 00:35:30.697 "bdev_name": "malloc0", 00:35:30.697 "ublk_id": 0, 00:35:30.697 "num_queues": 1, 00:35:30.697 "queue_depth": 128 00:35:30.697 } 00:35:30.697 } 00:35:30.697 ] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "nbd", 00:35:30.697 "config": [] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "nvmf", 00:35:30.697 "config": [ 00:35:30.697 { 00:35:30.697 "method": "nvmf_set_config", 00:35:30.697 "params": { 00:35:30.697 "discovery_filter": "match_any", 00:35:30.697 "admin_cmd_passthru": { 00:35:30.697 "identify_ctrlr": false 00:35:30.697 }, 00:35:30.697 "dhchap_digests": [ 00:35:30.697 "sha256", 00:35:30.697 "sha384", 00:35:30.697 "sha512" 00:35:30.697 ], 00:35:30.697 "dhchap_dhgroups": [ 00:35:30.697 "null", 00:35:30.697 "ffdhe2048", 00:35:30.697 "ffdhe3072", 00:35:30.697 "ffdhe4096", 00:35:30.697 "ffdhe6144", 00:35:30.697 "ffdhe8192" 00:35:30.697 ] 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "nvmf_set_max_subsystems", 00:35:30.697 "params": { 00:35:30.697 "max_subsystems": 1024 00:35:30.697 } 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "method": "nvmf_set_crdt", 00:35:30.697 "params": { 00:35:30.697 "crdt1": 0, 00:35:30.697 "crdt2": 0, 00:35:30.697 "crdt3": 0 00:35:30.697 } 00:35:30.697 } 00:35:30.697 ] 00:35:30.697 }, 00:35:30.697 { 00:35:30.697 "subsystem": "iscsi", 00:35:30.697 "config": [ 00:35:30.697 { 00:35:30.697 "method": "iscsi_set_options", 00:35:30.697 "params": { 00:35:30.697 "node_base": "iqn.2016-06.io.spdk", 00:35:30.697 "max_sessions": 128, 00:35:30.697 "max_connections_per_session": 2, 00:35:30.697 "max_queue_depth": 64, 00:35:30.697 "default_time2wait": 2, 00:35:30.697 "default_time2retain": 20, 00:35:30.697 "first_burst_length": 8192, 00:35:30.697 "immediate_data": true, 00:35:30.697 "allow_duplicated_isid": false, 00:35:30.698 "error_recovery_level": 0, 00:35:30.698 "nop_timeout": 60, 00:35:30.698 "nop_in_interval": 30, 00:35:30.698 "disable_chap": false, 00:35:30.698 "require_chap": false, 00:35:30.698 "mutual_chap": false, 00:35:30.698 "chap_group": 0, 00:35:30.698 "max_large_datain_per_connection": 64, 00:35:30.698 "max_r2t_per_connection": 4, 00:35:30.698 "pdu_pool_size": 36864, 00:35:30.698 "immediate_data_pool_size": 16384, 00:35:30.698 "data_out_pool_size": 2048 00:35:30.698 } 00:35:30.698 } 00:35:30.698 ] 00:35:30.698 } 00:35:30.698 ] 00:35:30.698 }' 00:35:30.698 07:02:03 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:30.698 07:02:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:30.698 [2024-12-06 07:02:03.233321] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:35:30.698 [2024-12-06 07:02:03.233503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75098 ] 00:35:30.957 [2024-12-06 07:02:03.408988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.957 [2024-12-06 07:02:03.493609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.902 [2024-12-06 07:02:04.270727] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:31.902 [2024-12-06 07:02:04.271696] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:31.902 [2024-12-06 07:02:04.278871] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:35:31.902 [2024-12-06 07:02:04.278972] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:35:31.902 [2024-12-06 07:02:04.278988] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:35:31.902 [2024-12-06 07:02:04.278995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:35:31.902 [2024-12-06 07:02:04.287811] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:31.902 [2024-12-06 07:02:04.287837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:31.902 [2024-12-06 07:02:04.294850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:31.902 [2024-12-06 07:02:04.295008] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:35:31.902 [2024-12-06 07:02:04.310835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:35:31.902 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.902 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:35:31.902 07:02:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:35:31.902 07:02:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75098 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75098 ']' 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75098 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75098 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.903 killing process with pid 75098 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75098' 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75098 00:35:31.903 07:02:04 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75098 00:35:33.278 [2024-12-06 07:02:05.611435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:35:33.278 [2024-12-06 07:02:05.645794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:33.278 [2024-12-06 07:02:05.645937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:35:33.278 [2024-12-06 07:02:05.652786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:33.278 [2024-12-06 07:02:05.652861] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:35:33.278 [2024-12-06 07:02:05.652874] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:35:33.278 [2024-12-06 07:02:05.652903] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:33.278 [2024-12-06 07:02:05.653096] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:34.653 07:02:07 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:35:34.653 00:35:34.653 real 0m8.067s 00:35:34.653 user 0m6.096s 00:35:34.653 sys 0m2.883s 00:35:34.653 07:02:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.653 07:02:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:34.653 ************************************ 00:35:34.654 END TEST test_save_ublk_config 00:35:34.654 ************************************ 00:35:34.654 07:02:07 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75176 00:35:34.654 07:02:07 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:35:34.654 07:02:07 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:34.654 07:02:07 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75176 00:35:34.654 07:02:07 ublk -- common/autotest_common.sh@835 -- # '[' -z 75176 ']' 00:35:34.654 07:02:07 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.654 07:02:07 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.654 07:02:07 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.654 07:02:07 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.654 07:02:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:34.912 [2024-12-06 07:02:07.277322] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
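Unlike the single-core save-config targets, the suite's main spdk_tgt runs with -m 0x3, which is why two reactors come up on cores 0 and 1 below. The launch-and-wait idiom used throughout these tests is roughly the following sketch (spdk_pid and waitforlisten come from the autotest harness traced above):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!                 # 75176 in this run
    waitforlisten "$spdk_pid"   # blocks until /var/tmp/spdk.sock accepts RPCs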
00:35:34.912 [2024-12-06 07:02:07.277493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75176 ] 00:35:34.912 [2024-12-06 07:02:07.456664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:35.170 [2024-12-06 07:02:07.537935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.170 [2024-12-06 07:02:07.537950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.736 07:02:08 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.736 07:02:08 ublk -- common/autotest_common.sh@868 -- # return 0 00:35:35.736 07:02:08 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:35:35.736 07:02:08 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:35.736 07:02:08 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:35.736 07:02:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:35.736 ************************************ 00:35:35.736 START TEST test_create_ublk 00:35:35.736 ************************************ 00:35:35.736 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:35:35.736 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:35:35.736 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.736 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:35.736 [2024-12-06 07:02:08.249806] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:35.736 [2024-12-06 07:02:08.252323] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:35.736 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.736 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:35:35.736 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:35:35.736 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.736 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:35.995 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:35:35.995 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.995 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:35.995 [2024-12-06 07:02:08.474951] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:35:35.995 [2024-12-06 07:02:08.475459] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:35:35.995 [2024-12-06 07:02:08.475484] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:35:35.995 [2024-12-06 07:02:08.475493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:35:35.995 [2024-12-06 07:02:08.483994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:35.995 [2024-12-06 07:02:08.484020] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:35.995 
[2024-12-06 07:02:08.490765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:35.995 [2024-12-06 07:02:08.491474] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:35:35.995 [2024-12-06 07:02:08.506781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:35:35.995 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:35:35.995 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.995 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:35.995 07:02:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:35:35.995 { 00:35:35.995 "ublk_device": "/dev/ublkb0", 00:35:35.995 "id": 0, 00:35:35.995 "queue_depth": 512, 00:35:35.995 "num_queues": 4, 00:35:35.995 "bdev_name": "Malloc0" 00:35:35.995 } 00:35:35.995 ]' 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:35:35.995 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:35:36.253 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:35:36.253 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:35:36.253 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:35:36.253 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:35:36.253 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:35:36.253 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:35:36.253 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:35:36.253 07:02:08 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
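run_fio_test (from lvol/common.sh) assembles that template into the single fio invocation traced on the next line: a 10-second direct-I/O pattern write of 0xcc across the full 134217728-byte (128 MiB) device, matching FILE_SIZE set at the top of ublk.sh. Re-wrapped here for readability:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

Because the job is --time_based and pure write, fio immediately notes that the verification read phase will never start; that message is informational here, not a failure.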
00:35:36.253 07:02:08 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:35:36.512 fio: verification read phase will never start because write phase uses all of runtime 00:35:36.512 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:35:36.512 fio-3.35 00:35:36.512 Starting 1 process 00:35:46.518 00:35:46.518 fio_test: (groupid=0, jobs=1): err= 0: pid=75217: Fri Dec 6 07:02:19 2024 00:35:46.518 write: IOPS=13.9k, BW=54.3MiB/s (57.0MB/s)(543MiB/10001msec); 0 zone resets 00:35:46.518 clat (usec): min=43, max=3966, avg=70.85, stdev=120.53 00:35:46.518 lat (usec): min=43, max=3967, avg=71.43, stdev=120.55 00:35:46.518 clat percentiles (usec): 00:35:46.518 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 60], 00:35:46.518 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 63], 00:35:46.518 | 70.00th=[ 65], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 88], 00:35:46.518 | 99.00th=[ 110], 99.50th=[ 124], 99.90th=[ 2606], 99.95th=[ 3064], 00:35:46.518 | 99.99th=[ 3687] 00:35:46.518 bw ( KiB/s): min=53648, max=60416, per=100.00%, avg=55880.42, stdev=2138.84, samples=19 00:35:46.518 iops : min=13412, max=15104, avg=13970.11, stdev=534.71, samples=19 00:35:46.518 lat (usec) : 50=6.60%, 100=91.44%, 250=1.67%, 500=0.01%, 750=0.01% 00:35:46.518 lat (usec) : 1000=0.02% 00:35:46.518 lat (msec) : 2=0.10%, 4=0.15% 00:35:46.518 cpu : usr=2.99%, sys=8.16%, ctx=139103, majf=0, minf=796 00:35:46.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.518 issued rwts: total=0,139104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:46.518 00:35:46.518 Run status group 0 (all jobs): 00:35:46.518 WRITE: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=543MiB (570MB), run=10001-10001msec 00:35:46.518 00:35:46.518 Disk stats (read/write): 00:35:46.518 ublkb0: ios=0/137757, merge=0/0, ticks=0/8872, in_queue=8872, util=99.09% 00:35:46.519 07:02:19 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:46.519 [2024-12-06 07:02:19.025278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:35:46.519 [2024-12-06 07:02:19.065319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:46.519 [2024-12-06 07:02:19.066383] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:35:46.519 [2024-12-06 07:02:19.076783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:46.519 [2024-12-06 07:02:19.077156] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:35:46.519 [2024-12-06 07:02:19.077202] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.519 07:02:19 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
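fio's note that the "verification read phase will never start" is expected here: with --time_based --runtime=10 the write phase consumes the entire run, so the 0xcc pattern is written but never read back. A size-bounded variant that does execute the verify pass would look like this sketch (same device as above):

    fio --name=fio_test --filename=/dev/ublkb0 --rw=write --bs=4096 --direct=1 \
        --size=134217728 --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0
    # without --time_based, fio writes the full 128 MiB, then reads it back
    # and checks every block against the 0xcc pattern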
00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:46.519 [2024-12-06 07:02:19.090866] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:35:46.519 request: 00:35:46.519 { 00:35:46.519 "ublk_id": 0, 00:35:46.519 "method": "ublk_stop_disk", 00:35:46.519 "req_id": 1 00:35:46.519 } 00:35:46.519 Got JSON-RPC error response 00:35:46.519 response: 00:35:46.519 { 00:35:46.519 "code": -19, 00:35:46.519 "message": "No such device" 00:35:46.519 } 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:46.519 07:02:19 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.519 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:46.519 [2024-12-06 07:02:19.104963] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:46.777 [2024-12-06 07:02:19.111854] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:46.777 [2024-12-06 07:02:19.111910] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:35:46.777 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.777 07:02:19 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:35:46.777 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.777 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.036 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.036 07:02:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:35:47.036 07:02:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:35:47.036 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.036 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.036 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.036 07:02:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:35:47.036 07:02:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:35:47.295 07:02:19 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:35:47.295 07:02:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:35:47.295 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.295 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.295 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.295 07:02:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:35:47.295 07:02:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:35:47.295 07:02:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:35:47.295 00:35:47.295 real 0m11.502s 00:35:47.295 user 0m0.751s 00:35:47.295 sys 0m0.918s 00:35:47.295 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:47.295 07:02:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.295 ************************************ 00:35:47.295 END TEST test_create_ublk 00:35:47.295 ************************************ 00:35:47.295 07:02:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:35:47.295 07:02:19 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:47.295 07:02:19 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:47.295 07:02:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.295 ************************************ 00:35:47.295 START TEST test_create_multi_ublk 00:35:47.295 ************************************ 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.295 [2024-12-06 07:02:19.810793] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:47.295 [2024-12-06 07:02:19.813117] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.295 07:02:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.553 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.553 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:35:47.553 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:35:47.553 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.553 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.553 [2024-12-06 07:02:20.111000] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
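test_create_multi_ublk repeats the same create path for four bdev/ublk pairs; the loop implied by 'seq 0 3' in the trace amounts to this sketch:

    for i in $(seq 0 3); do
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
    done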
00:35:47.553 [2024-12-06 07:02:20.111495] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:35:47.553 [2024-12-06 07:02:20.111517] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:35:47.553 [2024-12-06 07:02:20.111530] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:35:47.553 [2024-12-06 07:02:20.119969] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:47.553 [2024-12-06 07:02:20.120002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:47.553 [2024-12-06 07:02:20.129854] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:47.553 [2024-12-06 07:02:20.130728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:35:47.553 [2024-12-06 07:02:20.139349] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.813 [2024-12-06 07:02:20.363935] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:35:47.813 [2024-12-06 07:02:20.364549] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:35:47.813 [2024-12-06 07:02:20.364574] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:35:47.813 [2024-12-06 07:02:20.364583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:35:47.813 [2024-12-06 07:02:20.371914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:47.813 [2024-12-06 07:02:20.371941] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:47.813 [2024-12-06 07:02:20.378775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:47.813 [2024-12-06 07:02:20.379536] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:35:47.813 [2024-12-06 07:02:20.387868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:47.813 07:02:20 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.813 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.071 [2024-12-06 07:02:20.600948] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:35:48.071 [2024-12-06 07:02:20.601465] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:35:48.071 [2024-12-06 07:02:20.601487] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:35:48.071 [2024-12-06 07:02:20.601499] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:35:48.071 [2024-12-06 07:02:20.608907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:48.071 [2024-12-06 07:02:20.608955] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:48.071 [2024-12-06 07:02:20.616870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:48.071 [2024-12-06 07:02:20.617633] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:35:48.071 [2024-12-06 07:02:20.640836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.071 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.331 [2024-12-06 07:02:20.858915] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:35:48.331 [2024-12-06 07:02:20.859419] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:35:48.331 [2024-12-06 07:02:20.859443] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:35:48.331 [2024-12-06 07:02:20.859453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:35:48.331 [2024-12-06 
07:02:20.866890] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:48.331 [2024-12-06 07:02:20.866917] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:48.331 [2024-12-06 07:02:20.874798] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:48.331 [2024-12-06 07:02:20.875506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:35:48.331 [2024-12-06 07:02:20.878671] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:35:48.331 { 00:35:48.331 "ublk_device": "/dev/ublkb0", 00:35:48.331 "id": 0, 00:35:48.331 "queue_depth": 512, 00:35:48.331 "num_queues": 4, 00:35:48.331 "bdev_name": "Malloc0" 00:35:48.331 }, 00:35:48.331 { 00:35:48.331 "ublk_device": "/dev/ublkb1", 00:35:48.331 "id": 1, 00:35:48.331 "queue_depth": 512, 00:35:48.331 "num_queues": 4, 00:35:48.331 "bdev_name": "Malloc1" 00:35:48.331 }, 00:35:48.331 { 00:35:48.331 "ublk_device": "/dev/ublkb2", 00:35:48.331 "id": 2, 00:35:48.331 "queue_depth": 512, 00:35:48.331 "num_queues": 4, 00:35:48.331 "bdev_name": "Malloc2" 00:35:48.331 }, 00:35:48.331 { 00:35:48.331 "ublk_device": "/dev/ublkb3", 00:35:48.331 "id": 3, 00:35:48.331 "queue_depth": 512, 00:35:48.331 "num_queues": 4, 00:35:48.331 "bdev_name": "Malloc3" 00:35:48.331 } 00:35:48.331 ]' 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:48.331 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:35:48.590 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:35:48.590 07:02:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:48.590 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
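Each entry in that JSON array is then checked field by field; the validation idiom, as in the jq lines here, is extraction plus a bash comparison:

    ublk_dev=$(scripts/rpc.py ublk_get_disks)
    [[ $(jq -r '.[2].ublk_device' <<< "$ublk_dev") == /dev/ublkb2 ]]
    [[ $(jq -r '.[2].queue_depth' <<< "$ublk_dev") == 512 ]]
    [[ $(jq -r '.[2].bdev_name'   <<< "$ublk_dev") == Malloc2 ]]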
00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:48.849 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.107 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.366 07:02:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.366 [2024-12-06 07:02:21.949036] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:35:49.625 [2024-12-06 07:02:21.989744] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:49.625 [2024-12-06 07:02:21.990798] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:35:49.625 [2024-12-06 07:02:21.997856] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:49.625 [2024-12-06 07:02:21.998226] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:35:49.625 [2024-12-06 07:02:21.998250] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.625 [2024-12-06 07:02:22.012920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:35:49.625 [2024-12-06 07:02:22.044859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:49.625 [2024-12-06 07:02:22.045895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:35:49.625 [2024-12-06 07:02:22.053914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:49.625 [2024-12-06 07:02:22.054284] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:35:49.625 [2024-12-06 07:02:22.054324] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.625 [2024-12-06 07:02:22.070849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:35:49.625 [2024-12-06 07:02:22.099784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:49.625 [2024-12-06 07:02:22.100784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:35:49.625 [2024-12-06 07:02:22.106907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:49.625 [2024-12-06 07:02:22.107250] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:35:49.625 [2024-12-06 07:02:22.107274] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
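The multi-disk teardown mirrors setup in reverse order: stop each kernel device, destroy the ublk target (with a 120 s RPC timeout, as in the rpc.py call below), then delete the backing bdevs. In sketch form:

    for i in $(seq 0 3); do scripts/rpc.py ublk_stop_disk "$i"; done
    scripts/rpc.py -t 120 ublk_destroy_target
    for i in $(seq 0 3); do scripts/rpc.py bdev_malloc_delete "Malloc$i"; done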
00:35:49.625 [2024-12-06 07:02:22.121835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:35:49.625 [2024-12-06 07:02:22.152852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:49.625 [2024-12-06 07:02:22.153699] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:35:49.625 [2024-12-06 07:02:22.160843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:49.625 [2024-12-06 07:02:22.161185] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:35:49.625 [2024-12-06 07:02:22.161209] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.625 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:35:49.884 [2024-12-06 07:02:22.443898] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:49.884 [2024-12-06 07:02:22.450854] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:49.884 [2024-12-06 07:02:22.450896] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:35:49.884 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:35:49.884 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.884 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:35:49.884 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.884 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:50.451 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.451 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:50.451 07:02:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:35:50.451 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.451 07:02:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.019 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.277 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.277 07:02:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:35:51.277 07:02:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:35:51.277 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.277 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.277 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.277 07:02:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:35:51.277 07:02:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:35:51.536 ************************************ 00:35:51.536 END TEST test_create_multi_ublk 00:35:51.536 ************************************ 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:35:51.536 00:35:51.536 real 0m4.137s 00:35:51.536 user 0m1.345s 00:35:51.536 sys 0m0.159s 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.536 07:02:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.536 07:02:23 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:51.536 07:02:23 ublk -- ublk/ublk.sh@147 -- # cleanup 00:35:51.536 07:02:23 ublk -- ublk/ublk.sh@130 -- # killprocess 75176 00:35:51.536 07:02:23 ublk -- common/autotest_common.sh@954 -- # '[' -z 75176 ']' 00:35:51.536 07:02:23 ublk -- common/autotest_common.sh@958 -- # kill -0 75176 00:35:51.536 07:02:23 ublk -- common/autotest_common.sh@959 -- # uname 00:35:51.536 07:02:23 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:51.536 07:02:23 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75176 00:35:51.536 killing process with pid 75176 00:35:51.536 07:02:24 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:51.536 07:02:24 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:51.536 07:02:24 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75176' 00:35:51.536 07:02:24 ublk -- common/autotest_common.sh@973 -- # kill 75176 00:35:51.536 07:02:24 ublk -- common/autotest_common.sh@978 -- # wait 75176 00:35:52.472 [2024-12-06 07:02:24.769801] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:52.472 [2024-12-06 07:02:24.769859] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:53.411 00:35:53.412 real 0m26.863s 00:35:53.412 user 0m39.017s 00:35:53.412 sys 0m10.267s 00:35:53.412 07:02:25 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.412 ************************************ 00:35:53.412 07:02:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:53.412 END TEST ublk 00:35:53.412 ************************************ 00:35:53.412 07:02:25 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:35:53.412 07:02:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:35:53.412 07:02:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:53.412 07:02:25 -- common/autotest_common.sh@10 -- # set +x 00:35:53.412 ************************************ 00:35:53.412 START TEST ublk_recovery 00:35:53.412 ************************************ 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:35:53.412 * Looking for test storage... 00:35:53.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.412 07:02:25 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:53.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.412 --rc genhtml_branch_coverage=1 00:35:53.412 --rc genhtml_function_coverage=1 00:35:53.412 --rc genhtml_legend=1 00:35:53.412 --rc geninfo_all_blocks=1 00:35:53.412 --rc geninfo_unexecuted_blocks=1 00:35:53.412 00:35:53.412 ' 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:53.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.412 --rc genhtml_branch_coverage=1 00:35:53.412 --rc genhtml_function_coverage=1 00:35:53.412 --rc genhtml_legend=1 00:35:53.412 --rc geninfo_all_blocks=1 00:35:53.412 --rc geninfo_unexecuted_blocks=1 00:35:53.412 00:35:53.412 ' 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:53.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.412 --rc genhtml_branch_coverage=1 00:35:53.412 --rc genhtml_function_coverage=1 00:35:53.412 --rc genhtml_legend=1 00:35:53.412 --rc geninfo_all_blocks=1 00:35:53.412 --rc geninfo_unexecuted_blocks=1 00:35:53.412 00:35:53.412 ' 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:53.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.412 --rc genhtml_branch_coverage=1 00:35:53.412 --rc genhtml_function_coverage=1 00:35:53.412 --rc genhtml_legend=1 00:35:53.412 --rc geninfo_all_blocks=1 00:35:53.412 --rc geninfo_unexecuted_blocks=1 00:35:53.412 00:35:53.412 ' 00:35:53.412 07:02:25 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:35:53.412 07:02:25 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:35:53.412 07:02:25 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:35:53.412 07:02:25 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:35:53.412 07:02:25 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:35:53.412 07:02:25 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:35:53.412 07:02:25 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:35:53.412 07:02:25 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:35:53.412 07:02:25 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:35:53.412 07:02:25 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:35:53.412 07:02:25 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75578 00:35:53.412 07:02:25 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:35:53.412 07:02:25 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:53.412 07:02:25 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75578 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75578 ']' 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.412 07:02:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:53.671 [2024-12-06 07:02:26.062215] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:35:53.671 [2024-12-06 07:02:26.062570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75578 ] 00:35:53.671 [2024-12-06 07:02:26.242887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:53.930 [2024-12-06 07:02:26.332633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.930 [2024-12-06 07:02:26.332653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.497 07:02:26 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.497 07:02:26 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:35:54.497 07:02:26 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:35:54.497 07:02:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.497 07:02:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:54.497 [2024-12-06 07:02:26.988864] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:54.497 [2024-12-06 07:02:26.991171] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:54.497 07:02:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.497 07:02:26 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:35:54.497 07:02:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.497 07:02:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:54.755 malloc0 00:35:54.755 07:02:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.755 07:02:27 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:35:54.755 07:02:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.755 07:02:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:54.755 [2024-12-06 07:02:27.098947] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:35:54.755 [2024-12-06 07:02:27.099092] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:35:54.755 [2024-12-06 07:02:27.099111] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:35:54.755 [2024-12-06 07:02:27.099120] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:35:54.755 [2024-12-06 07:02:27.110727] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:54.755 [2024-12-06 07:02:27.110753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:54.755 [2024-12-06 07:02:27.121830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:54.755 [2024-12-06 07:02:27.122025] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:35:54.755 [2024-12-06 07:02:27.137856] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:35:54.755 1 00:35:54.755 07:02:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.755 07:02:27 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:35:55.691 07:02:28 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75612 00:35:55.691 07:02:28 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:35:55.691 07:02:28 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:35:55.691 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:55.691 fio-3.35 00:35:55.691 Starting 1 process 00:36:00.959 07:02:33 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75578 00:36:00.959 07:02:33 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:36:06.247 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75578 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:36:06.247 07:02:38 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75723 00:36:06.247 07:02:38 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:36:06.247 07:02:38 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:06.247 07:02:38 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75723 00:36:06.247 07:02:38 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75723 ']' 00:36:06.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.247 07:02:38 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.247 07:02:38 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.247 07:02:38 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.247 07:02:38 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.247 07:02:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.247 [2024-12-06 07:02:38.287553] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
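The crash-recovery sequence around this restart is the heart of the test: fio keeps /dev/ublkb1 open while the target is killed, the kernel-side ublk device survives, and the new spdk_tgt reattaches to it. A sketch of the flow, with the arguments used in this run:

    kill -9 "$spdk_pid"                                    # SIGKILL mid-I/O; the fio job stays running
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &              # start a fresh target process
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev
    scripts/rpc.py ublk_recover_disk malloc0 1             # reattach bdev to kernel ublk dev 1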
00:36:06.247 [2024-12-06 07:02:38.287785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75723 ] 00:36:06.247 [2024-12-06 07:02:38.475610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:06.247 [2024-12-06 07:02:38.601661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.247 [2024-12-06 07:02:38.601679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:36:06.813 07:02:39 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.813 [2024-12-06 07:02:39.279837] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:36:06.813 [2024-12-06 07:02:39.282332] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.813 07:02:39 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.813 malloc0 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.813 07:02:39 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.813 [2024-12-06 07:02:39.395953] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:36:06.813 [2024-12-06 07:02:39.396018] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:36:06.813 [2024-12-06 07:02:39.396047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:36:06.813 1 00:36:06.813 07:02:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.813 07:02:39 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75612 00:36:07.071 [2024-12-06 07:02:39.406854] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:36:07.071 [2024-12-06 07:02:39.406915] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:36:07.071 [2024-12-06 07:02:39.406927] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:36:07.071 [2024-12-06 07:02:39.407029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:36:07.071 [2024-12-06 07:02:39.414795] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:36:07.071 [2024-12-06 07:02:39.422278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:36:07.071 [2024-12-06 07:02:39.429027] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:36:07.071 [2024-12-06 
07:02:39.429089] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:37:03.338 00:37:03.338 fio_test: (groupid=0, jobs=1): err= 0: pid=75615: Fri Dec 6 07:03:28 2024 00:37:03.338 read: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(4917MiB/60002msec) 00:37:03.338 slat (usec): min=2, max=470, avg= 5.81, stdev= 2.62 00:37:03.338 clat (usec): min=1173, max=6286.6k, avg=2975.27, stdev=43372.37 00:37:03.338 lat (usec): min=1178, max=6286.7k, avg=2981.08, stdev=43372.37 00:37:03.338 clat percentiles (usec): 00:37:03.338 | 1.00th=[ 2212], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2442], 00:37:03.338 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:37:03.338 | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 2900], 95.00th=[ 3589], 00:37:03.338 | 99.00th=[ 5276], 99.50th=[ 6063], 99.90th=[ 7373], 99.95th=[ 8455], 00:37:03.338 | 99.99th=[13304] 00:37:03.338 bw ( KiB/s): min=43016, max=98320, per=100.00%, avg=93314.62, stdev=8338.39, samples=107 00:37:03.338 iops : min=10754, max=24580, avg=23328.64, stdev=2084.60, samples=107 00:37:03.338 write: IOPS=21.0k, BW=81.9MiB/s (85.8MB/s)(4912MiB/60002msec); 0 zone resets 00:37:03.338 slat (usec): min=2, max=215, avg= 5.99, stdev= 2.66 00:37:03.338 clat (usec): min=994, max=6286.8k, avg=3115.75, stdev=46194.54 00:37:03.338 lat (usec): min=1001, max=6286.8k, avg=3121.73, stdev=46194.54 00:37:03.338 clat percentiles (usec): 00:37:03.338 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2573], 00:37:03.338 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:37:03.338 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2999], 95.00th=[ 3523], 00:37:03.338 | 99.00th=[ 5342], 99.50th=[ 6194], 99.90th=[ 7570], 99.95th=[ 8586], 00:37:03.338 | 99.99th=[13435] 00:37:03.339 bw ( KiB/s): min=43640, max=97512, per=100.00%, avg=93229.38, stdev=8202.73, samples=107 00:37:03.339 iops : min=10910, max=24378, avg=23307.35, stdev=2050.68, samples=107 00:37:03.339 lat (usec) : 1000=0.01% 00:37:03.339 lat (msec) : 2=0.27%, 4=95.96%, 10=3.74%, 20=0.02%, >=2000=0.01% 00:37:03.339 cpu : usr=10.42%, sys=23.12%, ctx=74181, majf=0, minf=14 00:37:03.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:37:03.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:03.339 issued rwts: total=1258739,1257583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:03.339 00:37:03.339 Run status group 0 (all jobs): 00:37:03.339 READ: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=4917MiB (5156MB), run=60002-60002msec 00:37:03.339 WRITE: bw=81.9MiB/s (85.8MB/s), 81.9MiB/s-81.9MiB/s (85.8MB/s-85.8MB/s), io=4912MiB (5151MB), run=60002-60002msec 00:37:03.339 00:37:03.339 Disk stats (read/write): 00:37:03.339 ublkb1: ios=1256064/1255031, merge=0/0, ticks=3637679/3680092, in_queue=7317771, util=99.94% 00:37:03.339 07:03:28 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:03.339 [2024-12-06 07:03:28.416936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:37:03.339 [2024-12-06 07:03:28.457841] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:03.339 
[2024-12-06 07:03:28.458197] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:37:03.339 [2024-12-06 07:03:28.468855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:03.339 [2024-12-06 07:03:28.468964] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:37:03.339 [2024-12-06 07:03:28.468982] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.339 07:03:28 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:03.339 [2024-12-06 07:03:28.484886] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:03.339 [2024-12-06 07:03:28.492789] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:03.339 [2024-12-06 07:03:28.492844] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.339 07:03:28 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:03.339 07:03:28 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:37:03.339 07:03:28 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75723 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75723 ']' 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75723 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75723 00:37:03.339 killing process with pid 75723 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75723' 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75723 00:37:03.339 07:03:28 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75723 00:37:03.339 [2024-12-06 07:03:30.066229] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:03.339 [2024-12-06 07:03:30.066282] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:03.339 00:37:03.339 real 1m5.331s 00:37:03.339 user 1m47.760s 00:37:03.339 sys 0m32.250s 00:37:03.339 07:03:31 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.339 07:03:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:03.339 ************************************ 00:37:03.339 END TEST ublk_recovery 00:37:03.339 ************************************ 00:37:03.339 07:03:31 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:37:03.339 07:03:31 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@260 -- # timing_exit lib 00:37:03.339 07:03:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:03.339 07:03:31 -- common/autotest_common.sh@10 -- # set +x 00:37:03.339 07:03:31 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:37:03.339 
07:03:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:37:03.339 07:03:31 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:37:03.339 07:03:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:03.339 07:03:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.339 07:03:31 -- common/autotest_common.sh@10 -- # set +x 00:37:03.339 ************************************ 00:37:03.339 START TEST ftl 00:37:03.339 ************************************ 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:37:03.339 * Looking for test storage... 00:37:03.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:03.339 07:03:31 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:03.339 07:03:31 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:03.339 07:03:31 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:03.339 07:03:31 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:37:03.339 07:03:31 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:37:03.339 07:03:31 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:37:03.339 07:03:31 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:37:03.339 07:03:31 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:37:03.339 07:03:31 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:37:03.339 07:03:31 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:37:03.339 07:03:31 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:03.339 07:03:31 ftl -- scripts/common.sh@344 -- # case "$op" in 00:37:03.339 07:03:31 ftl -- scripts/common.sh@345 -- # : 1 00:37:03.339 07:03:31 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:03.339 07:03:31 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:03.339 07:03:31 ftl -- scripts/common.sh@365 -- # decimal 1 00:37:03.339 07:03:31 ftl -- scripts/common.sh@353 -- # local d=1 00:37:03.339 07:03:31 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:03.339 07:03:31 ftl -- scripts/common.sh@355 -- # echo 1 00:37:03.339 07:03:31 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:37:03.339 07:03:31 ftl -- scripts/common.sh@366 -- # decimal 2 00:37:03.339 07:03:31 ftl -- scripts/common.sh@353 -- # local d=2 00:37:03.339 07:03:31 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:03.339 07:03:31 ftl -- scripts/common.sh@355 -- # echo 2 00:37:03.339 07:03:31 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:37:03.339 07:03:31 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:03.339 07:03:31 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:03.339 07:03:31 ftl -- scripts/common.sh@368 -- # return 0 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:03.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.339 --rc genhtml_branch_coverage=1 00:37:03.339 --rc genhtml_function_coverage=1 00:37:03.339 --rc genhtml_legend=1 00:37:03.339 --rc geninfo_all_blocks=1 00:37:03.339 --rc geninfo_unexecuted_blocks=1 00:37:03.339 00:37:03.339 ' 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:03.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.339 --rc genhtml_branch_coverage=1 00:37:03.339 --rc genhtml_function_coverage=1 00:37:03.339 --rc genhtml_legend=1 00:37:03.339 --rc geninfo_all_blocks=1 00:37:03.339 --rc geninfo_unexecuted_blocks=1 00:37:03.339 00:37:03.339 ' 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:03.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.339 --rc genhtml_branch_coverage=1 00:37:03.339 --rc genhtml_function_coverage=1 00:37:03.339 --rc genhtml_legend=1 00:37:03.339 --rc geninfo_all_blocks=1 00:37:03.339 --rc geninfo_unexecuted_blocks=1 00:37:03.339 00:37:03.339 ' 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:03.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.339 --rc genhtml_branch_coverage=1 00:37:03.339 --rc genhtml_function_coverage=1 00:37:03.339 --rc genhtml_legend=1 00:37:03.339 --rc geninfo_all_blocks=1 00:37:03.339 --rc geninfo_unexecuted_blocks=1 00:37:03.339 00:37:03.339 ' 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:03.339 07:03:31 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:37:03.339 07:03:31 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:03.339 07:03:31 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:03.339 07:03:31 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
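The lcov probe traced above leans on cmp_versions from scripts/common.sh: split both version strings on '.', '-' and ':' and compare field by field, so 1.15 sorts below 2. A minimal re-implementation of that idea (version_lt is an illustrative name; fields are assumed numeric):

version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}               # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                      # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x"     # prints the message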
00:37:03.339 07:03:31 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:03.339 07:03:31 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:03.339 07:03:31 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:03.339 07:03:31 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:03.339 07:03:31 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:03.339 07:03:31 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:03.339 07:03:31 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:03.339 07:03:31 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:03.339 07:03:31 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:03.339 07:03:31 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:03.339 07:03:31 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:03.339 07:03:31 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:03.339 07:03:31 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:03.339 07:03:31 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:03.339 07:03:31 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:03.339 07:03:31 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:03.339 07:03:31 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:03.339 07:03:31 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:03.339 07:03:31 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:03.339 07:03:31 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:03.339 07:03:31 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:03.339 07:03:31 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:03.339 07:03:31 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:03.339 07:03:31 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:03.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:03.339 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:03.339 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:03.339 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:03.339 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76514 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:37:03.339 07:03:31 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76514 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@835 -- # '[' -z 76514 ']' 00:37:03.339 07:03:31 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:03.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:03.339 07:03:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:03.339 [2024-12-06 07:03:31.950685] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:37:03.339 [2024-12-06 07:03:31.951132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76514 ] 00:37:03.339 [2024-12-06 07:03:32.118189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.339 [2024-12-06 07:03:32.199477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.339 07:03:32 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:03.339 07:03:32 ftl -- common/autotest_common.sh@868 -- # return 0 00:37:03.339 07:03:32 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:37:03.339 07:03:33 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:37:03.339 07:03:33 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:37:03.339 07:03:33 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@50 -- # break 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:37:03.339 07:03:34 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:37:03.339 07:03:35 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:37:03.339 07:03:35 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:37:03.339 07:03:35 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:37:03.339 07:03:35 ftl -- ftl/ftl.sh@63 -- # break 00:37:03.339 07:03:35 ftl -- ftl/ftl.sh@66 -- # killprocess 76514 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@954 -- # '[' -z 76514 ']' 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@958 -- # kill -0 76514 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@959 -- # uname 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:03.339 07:03:35 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76514 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:03.339 killing process with pid 76514 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76514' 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@973 -- # kill 76514 00:37:03.339 07:03:35 ftl -- common/autotest_common.sh@978 -- # wait 76514 00:37:04.275 07:03:36 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:37:04.275 07:03:36 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:37:04.275 07:03:36 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:04.275 07:03:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.275 07:03:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:04.275 ************************************ 00:37:04.275 START TEST ftl_fio_basic 00:37:04.275 ************************************ 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:37:04.275 * Looking for test storage... 00:37:04.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:37:04.275 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:04.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.535 --rc genhtml_branch_coverage=1 00:37:04.535 --rc genhtml_function_coverage=1 00:37:04.535 --rc genhtml_legend=1 00:37:04.535 --rc geninfo_all_blocks=1 00:37:04.535 --rc geninfo_unexecuted_blocks=1 00:37:04.535 00:37:04.535 ' 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:04.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.535 --rc genhtml_branch_coverage=1 00:37:04.535 --rc genhtml_function_coverage=1 00:37:04.535 --rc genhtml_legend=1 00:37:04.535 --rc geninfo_all_blocks=1 00:37:04.535 --rc geninfo_unexecuted_blocks=1 00:37:04.535 00:37:04.535 ' 00:37:04.535 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:04.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.535 --rc genhtml_branch_coverage=1 00:37:04.535 --rc genhtml_function_coverage=1 00:37:04.536 --rc genhtml_legend=1 00:37:04.536 --rc geninfo_all_blocks=1 00:37:04.536 --rc geninfo_unexecuted_blocks=1 00:37:04.536 00:37:04.536 ' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:04.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.536 --rc genhtml_branch_coverage=1 00:37:04.536 --rc genhtml_function_coverage=1 00:37:04.536 --rc genhtml_legend=1 00:37:04.536 --rc geninfo_all_blocks=1 00:37:04.536 --rc geninfo_unexecuted_blocks=1 00:37:04.536 00:37:04.536 ' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76651 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76651 00:37:04.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76651 ']' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:04.536 07:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:04.536 [2024-12-06 07:03:37.030403] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:37:04.536 [2024-12-06 07:03:37.030571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76651 ] 00:37:04.796 [2024-12-06 07:03:37.208679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:04.796 [2024-12-06 07:03:37.290354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.796 [2024-12-06 07:03:37.290468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.796 [2024-12-06 07:03:37.290497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:05.734 07:03:37 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:05.734 07:03:37 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:37:05.734 07:03:37 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:37:05.734 07:03:37 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:37:05.734 07:03:37 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:05.734 07:03:37 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:37:05.734 07:03:37 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:37:05.734 07:03:37 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:05.993 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:37:05.993 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:37:05.993 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:37:05.993 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:37:05.993 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:05.993 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:37:05.993 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:37:05.993 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:06.253 { 00:37:06.253 "name": "nvme0n1", 00:37:06.253 "aliases": [ 00:37:06.253 "b1b03265-9024-4707-b1d6-1c57bc843406" 00:37:06.253 ], 00:37:06.253 "product_name": "NVMe disk", 00:37:06.253 "block_size": 4096, 00:37:06.253 "num_blocks": 1310720, 00:37:06.253 "uuid": "b1b03265-9024-4707-b1d6-1c57bc843406", 00:37:06.253 "numa_id": -1, 00:37:06.253 "assigned_rate_limits": { 00:37:06.253 "rw_ios_per_sec": 0, 00:37:06.253 "rw_mbytes_per_sec": 0, 00:37:06.253 "r_mbytes_per_sec": 0, 00:37:06.253 "w_mbytes_per_sec": 0 00:37:06.253 }, 00:37:06.253 "claimed": false, 00:37:06.253 "zoned": false, 00:37:06.253 "supported_io_types": { 00:37:06.253 "read": true, 00:37:06.253 "write": true, 00:37:06.253 "unmap": true, 00:37:06.253 "flush": true, 00:37:06.253 "reset": true, 00:37:06.253 "nvme_admin": true, 00:37:06.253 "nvme_io": true, 00:37:06.253 "nvme_io_md": false, 00:37:06.253 "write_zeroes": true, 00:37:06.253 "zcopy": false, 00:37:06.253 "get_zone_info": false, 00:37:06.253 "zone_management": false, 00:37:06.253 "zone_append": false, 00:37:06.253 "compare": true, 00:37:06.253 "compare_and_write": false, 00:37:06.253 "abort": true, 00:37:06.253 
"seek_hole": false, 00:37:06.253 "seek_data": false, 00:37:06.253 "copy": true, 00:37:06.253 "nvme_iov_md": false 00:37:06.253 }, 00:37:06.253 "driver_specific": { 00:37:06.253 "nvme": [ 00:37:06.253 { 00:37:06.253 "pci_address": "0000:00:11.0", 00:37:06.253 "trid": { 00:37:06.253 "trtype": "PCIe", 00:37:06.253 "traddr": "0000:00:11.0" 00:37:06.253 }, 00:37:06.253 "ctrlr_data": { 00:37:06.253 "cntlid": 0, 00:37:06.253 "vendor_id": "0x1b36", 00:37:06.253 "model_number": "QEMU NVMe Ctrl", 00:37:06.253 "serial_number": "12341", 00:37:06.253 "firmware_revision": "8.0.0", 00:37:06.253 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:06.253 "oacs": { 00:37:06.253 "security": 0, 00:37:06.253 "format": 1, 00:37:06.253 "firmware": 0, 00:37:06.253 "ns_manage": 1 00:37:06.253 }, 00:37:06.253 "multi_ctrlr": false, 00:37:06.253 "ana_reporting": false 00:37:06.253 }, 00:37:06.253 "vs": { 00:37:06.253 "nvme_version": "1.4" 00:37:06.253 }, 00:37:06.253 "ns_data": { 00:37:06.253 "id": 1, 00:37:06.253 "can_share": false 00:37:06.253 } 00:37:06.253 } 00:37:06.253 ], 00:37:06.253 "mp_policy": "active_passive" 00:37:06.253 } 00:37:06.253 } 00:37:06.253 ]' 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:06.253 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:06.512 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:37:06.512 07:03:38 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:37:06.771 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d008129c-276e-425d-b417-361ca6f45427 00:37:06.771 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d008129c-276e-425d-b417-361ca6f45427 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e589795e-6c91-4bcb-85fa-223dc781af04 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e589795e-6c91-4bcb-85fa-223dc781af04 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e589795e-6c91-4bcb-85fa-223dc781af04 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e589795e-6c91-4bcb-85fa-223dc781af04 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e589795e-6c91-4bcb-85fa-223dc781af04 
00:37:07.030 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:37:07.030 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e589795e-6c91-4bcb-85fa-223dc781af04 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:07.290 { 00:37:07.290 "name": "e589795e-6c91-4bcb-85fa-223dc781af04", 00:37:07.290 "aliases": [ 00:37:07.290 "lvs/nvme0n1p0" 00:37:07.290 ], 00:37:07.290 "product_name": "Logical Volume", 00:37:07.290 "block_size": 4096, 00:37:07.290 "num_blocks": 26476544, 00:37:07.290 "uuid": "e589795e-6c91-4bcb-85fa-223dc781af04", 00:37:07.290 "assigned_rate_limits": { 00:37:07.290 "rw_ios_per_sec": 0, 00:37:07.290 "rw_mbytes_per_sec": 0, 00:37:07.290 "r_mbytes_per_sec": 0, 00:37:07.290 "w_mbytes_per_sec": 0 00:37:07.290 }, 00:37:07.290 "claimed": false, 00:37:07.290 "zoned": false, 00:37:07.290 "supported_io_types": { 00:37:07.290 "read": true, 00:37:07.290 "write": true, 00:37:07.290 "unmap": true, 00:37:07.290 "flush": false, 00:37:07.290 "reset": true, 00:37:07.290 "nvme_admin": false, 00:37:07.290 "nvme_io": false, 00:37:07.290 "nvme_io_md": false, 00:37:07.290 "write_zeroes": true, 00:37:07.290 "zcopy": false, 00:37:07.290 "get_zone_info": false, 00:37:07.290 "zone_management": false, 00:37:07.290 "zone_append": false, 00:37:07.290 "compare": false, 00:37:07.290 "compare_and_write": false, 00:37:07.290 "abort": false, 00:37:07.290 "seek_hole": true, 00:37:07.290 "seek_data": true, 00:37:07.290 "copy": false, 00:37:07.290 "nvme_iov_md": false 00:37:07.290 }, 00:37:07.290 "driver_specific": { 00:37:07.290 "lvol": { 00:37:07.290 "lvol_store_uuid": "d008129c-276e-425d-b417-361ca6f45427", 00:37:07.290 "base_bdev": "nvme0n1", 00:37:07.290 "thin_provision": true, 00:37:07.290 "num_allocated_clusters": 0, 00:37:07.290 "snapshot": false, 00:37:07.290 "clone": false, 00:37:07.290 "esnap_clone": false 00:37:07.290 } 00:37:07.290 } 00:37:07.290 } 00:37:07.290 ]' 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:37:07.290 07:03:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:37:07.549 07:03:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:37:07.549 07:03:40 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:37:07.549 07:03:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e589795e-6c91-4bcb-85fa-223dc781af04 00:37:07.549 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e589795e-6c91-4bcb-85fa-223dc781af04 00:37:07.549 07:03:40 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:07.549 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:37:07.549 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:37:07.549 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e589795e-6c91-4bcb-85fa-223dc781af04 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:07.808 { 00:37:07.808 "name": "e589795e-6c91-4bcb-85fa-223dc781af04", 00:37:07.808 "aliases": [ 00:37:07.808 "lvs/nvme0n1p0" 00:37:07.808 ], 00:37:07.808 "product_name": "Logical Volume", 00:37:07.808 "block_size": 4096, 00:37:07.808 "num_blocks": 26476544, 00:37:07.808 "uuid": "e589795e-6c91-4bcb-85fa-223dc781af04", 00:37:07.808 "assigned_rate_limits": { 00:37:07.808 "rw_ios_per_sec": 0, 00:37:07.808 "rw_mbytes_per_sec": 0, 00:37:07.808 "r_mbytes_per_sec": 0, 00:37:07.808 "w_mbytes_per_sec": 0 00:37:07.808 }, 00:37:07.808 "claimed": false, 00:37:07.808 "zoned": false, 00:37:07.808 "supported_io_types": { 00:37:07.808 "read": true, 00:37:07.808 "write": true, 00:37:07.808 "unmap": true, 00:37:07.808 "flush": false, 00:37:07.808 "reset": true, 00:37:07.808 "nvme_admin": false, 00:37:07.808 "nvme_io": false, 00:37:07.808 "nvme_io_md": false, 00:37:07.808 "write_zeroes": true, 00:37:07.808 "zcopy": false, 00:37:07.808 "get_zone_info": false, 00:37:07.808 "zone_management": false, 00:37:07.808 "zone_append": false, 00:37:07.808 "compare": false, 00:37:07.808 "compare_and_write": false, 00:37:07.808 "abort": false, 00:37:07.808 "seek_hole": true, 00:37:07.808 "seek_data": true, 00:37:07.808 "copy": false, 00:37:07.808 "nvme_iov_md": false 00:37:07.808 }, 00:37:07.808 "driver_specific": { 00:37:07.808 "lvol": { 00:37:07.808 "lvol_store_uuid": "d008129c-276e-425d-b417-361ca6f45427", 00:37:07.808 "base_bdev": "nvme0n1", 00:37:07.808 "thin_provision": true, 00:37:07.808 "num_allocated_clusters": 0, 00:37:07.808 "snapshot": false, 00:37:07.808 "clone": false, 00:37:07.808 "esnap_clone": false 00:37:07.808 } 00:37:07.808 } 00:37:07.808 } 00:37:07.808 ]' 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:37:07.808 07:03:40 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:37:08.068 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e589795e-6c91-4bcb-85fa-223dc781af04 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=e589795e-6c91-4bcb-85fa-223dc781af04 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:37:08.068 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e589795e-6c91-4bcb-85fa-223dc781af04 00:37:08.638 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:08.638 { 00:37:08.638 "name": "e589795e-6c91-4bcb-85fa-223dc781af04", 00:37:08.638 "aliases": [ 00:37:08.638 "lvs/nvme0n1p0" 00:37:08.638 ], 00:37:08.638 "product_name": "Logical Volume", 00:37:08.638 "block_size": 4096, 00:37:08.638 "num_blocks": 26476544, 00:37:08.638 "uuid": "e589795e-6c91-4bcb-85fa-223dc781af04", 00:37:08.638 "assigned_rate_limits": { 00:37:08.638 "rw_ios_per_sec": 0, 00:37:08.638 "rw_mbytes_per_sec": 0, 00:37:08.638 "r_mbytes_per_sec": 0, 00:37:08.638 "w_mbytes_per_sec": 0 00:37:08.638 }, 00:37:08.638 "claimed": false, 00:37:08.638 "zoned": false, 00:37:08.638 "supported_io_types": { 00:37:08.638 "read": true, 00:37:08.638 "write": true, 00:37:08.638 "unmap": true, 00:37:08.638 "flush": false, 00:37:08.638 "reset": true, 00:37:08.638 "nvme_admin": false, 00:37:08.638 "nvme_io": false, 00:37:08.638 "nvme_io_md": false, 00:37:08.638 "write_zeroes": true, 00:37:08.638 "zcopy": false, 00:37:08.638 "get_zone_info": false, 00:37:08.638 "zone_management": false, 00:37:08.638 "zone_append": false, 00:37:08.638 "compare": false, 00:37:08.638 "compare_and_write": false, 00:37:08.638 "abort": false, 00:37:08.638 "seek_hole": true, 00:37:08.638 "seek_data": true, 00:37:08.638 "copy": false, 00:37:08.638 "nvme_iov_md": false 00:37:08.638 }, 00:37:08.638 "driver_specific": { 00:37:08.638 "lvol": { 00:37:08.638 "lvol_store_uuid": "d008129c-276e-425d-b417-361ca6f45427", 00:37:08.638 "base_bdev": "nvme0n1", 00:37:08.638 "thin_provision": true, 00:37:08.638 "num_allocated_clusters": 0, 00:37:08.638 "snapshot": false, 00:37:08.638 "clone": false, 00:37:08.638 "esnap_clone": false 00:37:08.638 } 00:37:08.638 } 00:37:08.638 } 00:37:08.638 ]' 00:37:08.638 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:08.638 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:37:08.638 07:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:08.638 07:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:08.638 07:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:08.638 07:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:37:08.638 07:03:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:37:08.638 07:03:41 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:37:08.638 07:03:41 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e589795e-6c91-4bcb-85fa-223dc781af04 -c nvc0n1p0 --l2p_dram_limit 60 00:37:08.898 [2024-12-06 07:03:41.297523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 07:03:41.297575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:08.898 [2024-12-06 07:03:41.297615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:08.898 
[2024-12-06 07:03:41.297626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.297781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 07:03:41.297823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:08.898 [2024-12-06 07:03:41.297841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:37:08.898 [2024-12-06 07:03:41.297853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.297916] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:08.898 [2024-12-06 07:03:41.298939] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:08.898 [2024-12-06 07:03:41.298987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 07:03:41.299002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:08.898 [2024-12-06 07:03:41.299016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:37:08.898 [2024-12-06 07:03:41.299027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.299189] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c00fcd0d-500f-46fb-835b-fe369a65bf51 00:37:08.898 [2024-12-06 07:03:41.300316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 07:03:41.300568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:37:08.898 [2024-12-06 07:03:41.300610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:37:08.898 [2024-12-06 07:03:41.300625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.305185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 07:03:41.305252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:08.898 [2024-12-06 07:03:41.305267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.466 ms 00:37:08.898 [2024-12-06 07:03:41.305280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.305410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 07:03:41.305432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:08.898 [2024-12-06 07:03:41.305444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:37:08.898 [2024-12-06 07:03:41.305460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.305534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 07:03:41.305554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:08.898 [2024-12-06 07:03:41.305566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:37:08.898 [2024-12-06 07:03:41.305589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.305631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:08.898 [2024-12-06 07:03:41.309784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 
07:03:41.309820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:08.898 [2024-12-06 07:03:41.309855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:37:08.898 [2024-12-06 07:03:41.309868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.309930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.898 [2024-12-06 07:03:41.309951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:08.898 [2024-12-06 07:03:41.309965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:37:08.898 [2024-12-06 07:03:41.309975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.898 [2024-12-06 07:03:41.310026] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:37:08.898 [2024-12-06 07:03:41.310206] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:08.898 [2024-12-06 07:03:41.310241] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:08.899 [2024-12-06 07:03:41.310257] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:08.899 [2024-12-06 07:03:41.310279] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:08.899 [2024-12-06 07:03:41.310292] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:08.899 [2024-12-06 07:03:41.310309] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:08.899 [2024-12-06 07:03:41.310320] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:08.899 [2024-12-06 07:03:41.310335] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:08.899 [2024-12-06 07:03:41.310346] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:08.899 [2024-12-06 07:03:41.310367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.899 [2024-12-06 07:03:41.310382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:08.899 [2024-12-06 07:03:41.310398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:37:08.899 [2024-12-06 07:03:41.310410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.899 [2024-12-06 07:03:41.310516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.899 [2024-12-06 07:03:41.310535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:08.899 [2024-12-06 07:03:41.310548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:37:08.899 [2024-12-06 07:03:41.310559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.899 [2024-12-06 07:03:41.310696] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:08.899 [2024-12-06 07:03:41.310977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:08.899 [2024-12-06 07:03:41.311035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:08.899 [2024-12-06 07:03:41.311090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:08.899 [2024-12-06 07:03:41.311209] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:37:08.899 [2024-12-06 07:03:41.311258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:08.899 [2024-12-06 07:03:41.311300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:08.899 [2024-12-06 07:03:41.311338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:08.899 [2024-12-06 07:03:41.311477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:08.899 [2024-12-06 07:03:41.311515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:08.899 [2024-12-06 07:03:41.311629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:08.899 [2024-12-06 07:03:41.311681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:08.899 [2024-12-06 07:03:41.311886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:08.899 [2024-12-06 07:03:41.311939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:08.899 [2024-12-06 07:03:41.311982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:08.899 [2024-12-06 07:03:41.312147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:08.899 [2024-12-06 07:03:41.312235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:08.899 [2024-12-06 07:03:41.312262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:08.899 [2024-12-06 07:03:41.312288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:08.899 [2024-12-06 07:03:41.312313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:08.899 [2024-12-06 07:03:41.312324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:08.899 [2024-12-06 07:03:41.312348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:08.899 [2024-12-06 07:03:41.312361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:08.899 [2024-12-06 07:03:41.312385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:08.899 [2024-12-06 07:03:41.312396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:08.899 [2024-12-06 07:03:41.312420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:08.899 [2024-12-06 07:03:41.312438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:08.899 [2024-12-06 07:03:41.312485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:08.899 [2024-12-06 07:03:41.312497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:08.899 [2024-12-06 07:03:41.312526] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:08.899 [2024-12-06 07:03:41.312536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:08.899 [2024-12-06 07:03:41.312548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:08.899 [2024-12-06 07:03:41.312559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:08.899 [2024-12-06 07:03:41.312596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:08.899 [2024-12-06 07:03:41.312608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312632] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:08.899 [2024-12-06 07:03:41.312645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:08.899 [2024-12-06 07:03:41.312656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:08.899 [2024-12-06 07:03:41.312668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:08.899 [2024-12-06 07:03:41.312679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:08.899 [2024-12-06 07:03:41.312694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:08.899 [2024-12-06 07:03:41.312704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:08.899 [2024-12-06 07:03:41.312716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:08.899 [2024-12-06 07:03:41.312726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:08.899 [2024-12-06 07:03:41.312738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:08.899 [2024-12-06 07:03:41.312799] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:08.899 [2024-12-06 07:03:41.312821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:08.899 [2024-12-06 07:03:41.312836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:08.899 [2024-12-06 07:03:41.312849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:08.899 [2024-12-06 07:03:41.312864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:08.899 [2024-12-06 07:03:41.312880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:08.899 [2024-12-06 07:03:41.312891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:08.899 [2024-12-06 07:03:41.312904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:08.899 [2024-12-06 07:03:41.312915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:08.900 [2024-12-06 07:03:41.312929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:37:08.900 [2024-12-06 07:03:41.312940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:08.900 [2024-12-06 07:03:41.312956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:08.900 [2024-12-06 07:03:41.312967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:08.900 [2024-12-06 07:03:41.312980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:08.900 [2024-12-06 07:03:41.312991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:08.900 [2024-12-06 07:03:41.313020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:08.900 [2024-12-06 07:03:41.313048] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:08.900 [2024-12-06 07:03:41.313063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:08.900 [2024-12-06 07:03:41.313094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:08.900 [2024-12-06 07:03:41.313108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:08.900 [2024-12-06 07:03:41.313119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:08.900 [2024-12-06 07:03:41.313133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:08.900 [2024-12-06 07:03:41.313147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:08.900 [2024-12-06 07:03:41.313161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:08.900 [2024-12-06 07:03:41.313173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.523 ms 00:37:08.900 [2024-12-06 07:03:41.313186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:08.900 [2024-12-06 07:03:41.313267] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
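Everything from the "Check configuration" step down to this scrub notice is the startup trace of a single RPC: the bdev_ftl_create call issued at fio.sh@60 above, layering ftl0 over the thin lvol with the nvc0n1p0 split as its NV cache. A minimal sketch of that invocation, plus a cross-check of the L2P numbers it logged (rpc.py path shortened; run from the SPDK repo root):

scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d e589795e-6c91-4bcb-85fa-223dc781af04 -c nvc0n1p0 --l2p_dram_limit 60

# Cross-check of the layout dump: 20971520 L2P entries at 4 bytes each is
# exactly the 80.00 MiB "Region l2p" reported above, while --l2p_dram_limit
# caps how much of that map may stay resident in DRAM (60 MiB here).
echo $(( 20971520 * 4 / 1048576 ))                # -> 80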
00:37:08.900 [2024-12-06 07:03:41.313293] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:37:11.433 [2024-12-06 07:03:43.937554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.433 [2024-12-06 07:03:43.937631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:37:11.433 [2024-12-06 07:03:43.937668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2624.303 ms 00:37:11.433 [2024-12-06 07:03:43.937681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.433 [2024-12-06 07:03:43.965798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.433 [2024-12-06 07:03:43.965876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:11.433 [2024-12-06 07:03:43.965896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.883 ms 00:37:11.433 [2024-12-06 07:03:43.965909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.433 [2024-12-06 07:03:43.966084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.433 [2024-12-06 07:03:43.966108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:11.433 [2024-12-06 07:03:43.966120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:37:11.433 [2024-12-06 07:03:43.966134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.433 [2024-12-06 07:03:44.021145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.433 [2024-12-06 07:03:44.021223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:11.433 [2024-12-06 07:03:44.021246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.949 ms 00:37:11.433 [2024-12-06 07:03:44.021261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.433 [2024-12-06 07:03:44.021340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.433 [2024-12-06 07:03:44.021362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:11.433 [2024-12-06 07:03:44.021374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:37:11.433 [2024-12-06 07:03:44.021386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.433 [2024-12-06 07:03:44.021770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.433 [2024-12-06 07:03:44.021798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:11.433 [2024-12-06 07:03:44.021812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:37:11.433 [2024-12-06 07:03:44.021858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.691 [2024-12-06 07:03:44.022081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.691 [2024-12-06 07:03:44.022112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:11.691 [2024-12-06 07:03:44.022127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:37:11.691 [2024-12-06 07:03:44.022143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.691 [2024-12-06 07:03:44.038675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.691 [2024-12-06 07:03:44.038957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:11.691 [2024-12-06 
07:03:44.038986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.496 ms 00:37:11.691 [2024-12-06 07:03:44.039003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.691 [2024-12-06 07:03:44.051608] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:37:11.691 [2024-12-06 07:03:44.066787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.691 [2024-12-06 07:03:44.067021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:11.691 [2024-12-06 07:03:44.067192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.625 ms 00:37:11.691 [2024-12-06 07:03:44.067308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.691 [2024-12-06 07:03:44.121829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.691 [2024-12-06 07:03:44.122065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:37:11.691 [2024-12-06 07:03:44.122248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.414 ms 00:37:11.691 [2024-12-06 07:03:44.122299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.691 [2024-12-06 07:03:44.122662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.691 [2024-12-06 07:03:44.122863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:11.691 [2024-12-06 07:03:44.123048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:37:11.691 [2024-12-06 07:03:44.123208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.691 [2024-12-06 07:03:44.151355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.691 [2024-12-06 07:03:44.151549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:37:11.692 [2024-12-06 07:03:44.151737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.927 ms 00:37:11.692 [2024-12-06 07:03:44.151858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.692 [2024-12-06 07:03:44.179099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.692 [2024-12-06 07:03:44.179286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:37:11.692 [2024-12-06 07:03:44.179492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.080 ms 00:37:11.692 [2024-12-06 07:03:44.179684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.692 [2024-12-06 07:03:44.180639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.692 [2024-12-06 07:03:44.180822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:11.692 [2024-12-06 07:03:44.180962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:37:11.692 [2024-12-06 07:03:44.181012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.692 [2024-12-06 07:03:44.268672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.692 [2024-12-06 07:03:44.268932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:37:11.692 [2024-12-06 07:03:44.269060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.476 ms 00:37:11.692 [2024-12-06 07:03:44.269114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.950 [2024-12-06 
07:03:44.298248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.950 [2024-12-06 07:03:44.298432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:37:11.950 [2024-12-06 07:03:44.298577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.877 ms 00:37:11.950 [2024-12-06 07:03:44.298627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.950 [2024-12-06 07:03:44.325611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.950 [2024-12-06 07:03:44.325809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:37:11.950 [2024-12-06 07:03:44.325946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.903 ms 00:37:11.950 [2024-12-06 07:03:44.325996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.950 [2024-12-06 07:03:44.353250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.950 [2024-12-06 07:03:44.353288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:11.950 [2024-12-06 07:03:44.353324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.173 ms 00:37:11.950 [2024-12-06 07:03:44.353334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.950 [2024-12-06 07:03:44.353391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.950 [2024-12-06 07:03:44.353408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:11.950 [2024-12-06 07:03:44.353426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:37:11.950 [2024-12-06 07:03:44.353437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.950 [2024-12-06 07:03:44.353600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:11.950 [2024-12-06 07:03:44.353621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:11.950 [2024-12-06 07:03:44.353636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:37:11.950 [2024-12-06 07:03:44.353646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:11.950 [2024-12-06 07:03:44.355121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3056.941 ms, result 0 00:37:11.950 { 00:37:11.950 "name": "ftl0", 00:37:11.950 "uuid": "c00fcd0d-500f-46fb-835b-fe369a65bf51" 00:37:11.950 } 00:37:11.950 07:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:37:11.950 07:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:37:11.950 07:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:11.950 07:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:37:11.950 07:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:11.950 07:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:11.950 07:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:12.208 07:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:37:12.465 [ 00:37:12.465 { 00:37:12.465 "name": "ftl0", 00:37:12.465 "aliases": [ 00:37:12.465 "c00fcd0d-500f-46fb-835b-fe369a65bf51" 00:37:12.465 ], 00:37:12.465 "product_name": "FTL 
disk", 00:37:12.465 "block_size": 4096, 00:37:12.465 "num_blocks": 20971520, 00:37:12.465 "uuid": "c00fcd0d-500f-46fb-835b-fe369a65bf51", 00:37:12.465 "assigned_rate_limits": { 00:37:12.465 "rw_ios_per_sec": 0, 00:37:12.465 "rw_mbytes_per_sec": 0, 00:37:12.465 "r_mbytes_per_sec": 0, 00:37:12.465 "w_mbytes_per_sec": 0 00:37:12.465 }, 00:37:12.465 "claimed": false, 00:37:12.465 "zoned": false, 00:37:12.465 "supported_io_types": { 00:37:12.465 "read": true, 00:37:12.465 "write": true, 00:37:12.465 "unmap": true, 00:37:12.465 "flush": true, 00:37:12.465 "reset": false, 00:37:12.465 "nvme_admin": false, 00:37:12.465 "nvme_io": false, 00:37:12.465 "nvme_io_md": false, 00:37:12.465 "write_zeroes": true, 00:37:12.465 "zcopy": false, 00:37:12.465 "get_zone_info": false, 00:37:12.465 "zone_management": false, 00:37:12.465 "zone_append": false, 00:37:12.465 "compare": false, 00:37:12.465 "compare_and_write": false, 00:37:12.465 "abort": false, 00:37:12.465 "seek_hole": false, 00:37:12.465 "seek_data": false, 00:37:12.465 "copy": false, 00:37:12.465 "nvme_iov_md": false 00:37:12.465 }, 00:37:12.465 "driver_specific": { 00:37:12.465 "ftl": { 00:37:12.465 "base_bdev": "e589795e-6c91-4bcb-85fa-223dc781af04", 00:37:12.465 "cache": "nvc0n1p0" 00:37:12.465 } 00:37:12.465 } 00:37:12.465 } 00:37:12.465 ] 00:37:12.465 07:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:37:12.465 07:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:37:12.465 07:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:37:12.723 07:03:45 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:37:12.723 07:03:45 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:37:12.981 [2024-12-06 07:03:45.359606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.359679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:12.982 [2024-12-06 07:03:45.359698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:12.982 [2024-12-06 07:03:45.359712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.360008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:12.982 [2024-12-06 07:03:45.363292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.363469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:12.982 [2024-12-06 07:03:45.363618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.244 ms 00:37:12.982 [2024-12-06 07:03:45.363669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.364319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.364493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:12.982 [2024-12-06 07:03:45.364641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:37:12.982 [2024-12-06 07:03:45.364692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.367858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.367938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:12.982 
[2024-12-06 07:03:45.367994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:37:12.982 [2024-12-06 07:03:45.368022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.373950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.373981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:12.982 [2024-12-06 07:03:45.374014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.872 ms 00:37:12.982 [2024-12-06 07:03:45.374025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.401887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.402084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:12.982 [2024-12-06 07:03:45.402152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.778 ms 00:37:12.982 [2024-12-06 07:03:45.402165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.419483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.419523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:12.982 [2024-12-06 07:03:45.419562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.259 ms 00:37:12.982 [2024-12-06 07:03:45.419573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.419864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.419887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:12.982 [2024-12-06 07:03:45.419903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:37:12.982 [2024-12-06 07:03:45.419914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.447854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.447892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:12.982 [2024-12-06 07:03:45.447926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.906 ms 00:37:12.982 [2024-12-06 07:03:45.447937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.474458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.474641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:12.982 [2024-12-06 07:03:45.474690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.468 ms 00:37:12.982 [2024-12-06 07:03:45.474702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.500987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.501024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:12.982 [2024-12-06 07:03:45.501058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.189 ms 00:37:12.982 [2024-12-06 07:03:45.501069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.527016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.982 [2024-12-06 07:03:45.527054] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:12.982 [2024-12-06 07:03:45.527088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.816 ms 00:37:12.982 [2024-12-06 07:03:45.527098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.982 [2024-12-06 07:03:45.527149] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:12.982 [2024-12-06 07:03:45.527170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 
[2024-12-06 07:03:45.527435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:12.982 [2024-12-06 07:03:45.527704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:37:12.983 [2024-12-06 07:03:45.527800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.527988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:12.983 [2024-12-06 07:03:45.528592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:12.983 [2024-12-06 07:03:45.528605] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c00fcd0d-500f-46fb-835b-fe369a65bf51 00:37:12.983 [2024-12-06 07:03:45.528616] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:37:12.983 [2024-12-06 07:03:45.528629] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:37:12.983 [2024-12-06 07:03:45.528639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:37:12.983 [2024-12-06 07:03:45.528653] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:37:12.983 [2024-12-06 07:03:45.528663] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:12.983 [2024-12-06 07:03:45.528674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:12.983 [2024-12-06 07:03:45.528684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:12.983 [2024-12-06 07:03:45.528695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:12.983 [2024-12-06 07:03:45.528703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:12.983 [2024-12-06 07:03:45.528715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.983 [2024-12-06 07:03:45.528725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:12.983 [2024-12-06 07:03:45.528750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:37:12.983 [2024-12-06 07:03:45.528763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.983 [2024-12-06 07:03:45.543212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.983 [2024-12-06 07:03:45.543249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:12.983 [2024-12-06 07:03:45.543283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.383 ms 00:37:12.983 [2024-12-06 07:03:45.543293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.983 [2024-12-06 07:03:45.543663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.983 [2024-12-06 07:03:45.543679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:12.983 [2024-12-06 07:03:45.543692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:37:12.983 [2024-12-06 07:03:45.543702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.593822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.594028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:13.241 [2024-12-06 07:03:45.594076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.594088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
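[Annotation] Every management step in the startup and shutdown sequences above is logged as the same record group (Action or Rollback, name, duration, status), so per-step cost can be tallied mechanically. A companion gawk sketch, again hypothetical and under the same one-record-per-line assumption:

    # tally_steps.awk (gawk): pair each trace_step "name:" record with the
    # "duration:" record that follows it, print one line per step, and total.
    /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: / {
        name = $0; sub(/.* name: /, "", name)
    }
    /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: / {
        ms = $0; sub(/.* duration: /, "", ms); sub(/ ms.*/, "", ms)
        total += ms
        printf "%10.3f ms  %s\n", ms, name
    }
    END { printf "%10.3f ms  total\n", total }

Piped through 'sort -rn | head', the output confirms what the records above already show: "Scrub NV cache" (2624.303 ms) dominates the 3056.941 ms "FTL startup" total reported by finish_msg, while the rollback-heavy "FTL shutdown" pass that follows completes in 397.012 ms.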
00:37:13.241 [2024-12-06 07:03:45.594164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.594180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:13.241 [2024-12-06 07:03:45.594194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.594204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.594360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.594382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:13.241 [2024-12-06 07:03:45.594396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.594407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.594443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.594457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:13.241 [2024-12-06 07:03:45.594470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.594481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.684542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.684911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:13.241 [2024-12-06 07:03:45.684947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.684961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.755409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.755460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:13.241 [2024-12-06 07:03:45.755497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.755508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.755609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.755626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:13.241 [2024-12-06 07:03:45.755643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.755653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.755808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.755827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:13.241 [2024-12-06 07:03:45.755841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.755852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.755984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.756004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:13.241 [2024-12-06 07:03:45.756018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 
07:03:45.756032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.756105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.756138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:13.241 [2024-12-06 07:03:45.756162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.756173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.756278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.756295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:13.241 [2024-12-06 07:03:45.756309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.241 [2024-12-06 07:03:45.756323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.241 [2024-12-06 07:03:45.756391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.241 [2024-12-06 07:03:45.756409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:13.242 [2024-12-06 07:03:45.756423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.242 [2024-12-06 07:03:45.756435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.242 [2024-12-06 07:03:45.756674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.012 ms, result 0 00:37:13.242 true 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76651 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76651 ']' 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76651 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76651 00:37:13.242 killing process with pid 76651 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76651' 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76651 00:37:13.242 07:03:45 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76651 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:37:17.433 07:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:17.433 07:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:17.433 07:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:17.433 07:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:37:17.433 07:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:17.433 07:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:37:17.693 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:37:17.693 fio-3.35 00:37:17.693 Starting 1 thread 00:37:22.968 00:37:22.968 test: (groupid=0, jobs=1): err= 0: pid=76855: Fri Dec 6 07:03:55 2024 00:37:22.968 read: IOPS=916, BW=60.9MiB/s (63.8MB/s)(255MiB/4182msec) 00:37:22.968 slat (nsec): min=5142, max=78077, avg=6948.59, stdev=3847.20 00:37:22.968 clat (usec): min=342, max=709, avg=485.28, stdev=45.30 00:37:22.968 lat (usec): min=348, max=723, avg=492.23, stdev=46.15 00:37:22.968 clat percentiles (usec): 00:37:22.968 | 1.00th=[ 416], 5.00th=[ 437], 10.00th=[ 441], 20.00th=[ 453], 00:37:22.968 | 30.00th=[ 461], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:37:22.968 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 553], 95.00th=[ 578], 00:37:22.968 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 693], 99.95th=[ 701], 00:37:22.968 | 99.99th=[ 709] 00:37:22.968 write: IOPS=923, BW=61.3MiB/s (64.3MB/s)(256MiB/4177msec); 0 zone resets 00:37:22.968 slat (nsec): min=19017, max=82619, avg=23309.19, stdev=6432.47 00:37:22.968 clat (usec): min=410, max=1054, avg=558.41, stdev=58.87 00:37:22.968 lat (usec): min=430, max=1076, avg=581.72, stdev=59.75 00:37:22.968 clat percentiles (usec): 00:37:22.968 | 1.00th=[ 453], 5.00th=[ 482], 10.00th=[ 498], 20.00th=[ 523], 00:37:22.968 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 562], 00:37:22.968 | 70.00th=[ 570], 80.00th=[ 594], 90.00th=[ 627], 95.00th=[ 652], 00:37:22.968 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 914], 99.95th=[ 947], 00:37:22.968 | 99.99th=[ 1057] 00:37:22.968 bw ( KiB/s): min=61336, max=64192, per=100.00%, avg=62866.00, stdev=913.04, samples=8 00:37:22.968 iops : min= 902, max= 944, avg=924.50, stdev=13.43, samples=8 00:37:22.968 lat (usec) : 500=42.03%, 750=57.15%, 1000=0.81% 00:37:22.968 lat 
(msec) : 2=0.01% 00:37:22.968 cpu : usr=99.16%, sys=0.14%, ctx=9, majf=0, minf=1169 00:37:22.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.968 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:22.968 00:37:22.968 Run status group 0 (all jobs): 00:37:22.968 READ: bw=60.9MiB/s (63.8MB/s), 60.9MiB/s-60.9MiB/s (63.8MB/s-63.8MB/s), io=255MiB (267MB), run=4182-4182msec 00:37:22.968 WRITE: bw=61.3MiB/s (64.3MB/s), 61.3MiB/s-61.3MiB/s (64.3MB/s-64.3MB/s), io=256MiB (269MB), run=4177-4177msec 00:37:24.346 ----------------------------------------------------- 00:37:24.346 Suppressions used: 00:37:24.346 count bytes template 00:37:24.346 1 5 /usr/src/fio/parse.c 00:37:24.346 1 8 libtcmalloc_minimal.so 00:37:24.346 1 904 libcrypto.so 00:37:24.346 ----------------------------------------------------- 00:37:24.346 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:37:24.346 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:24.605 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:24.605 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:24.605 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:37:24.605 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:24.605 07:03:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:37:24.605 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:37:24.605 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:37:24.605 fio-3.35 00:37:24.605 Starting 2 threads 00:37:56.692 00:37:56.692 first_half: (groupid=0, jobs=1): err= 0: pid=76952: Fri Dec 6 07:04:27 2024 00:37:56.692 read: IOPS=2249, BW=8999KiB/s (9215kB/s)(256MiB/29104msec) 00:37:56.692 slat (nsec): min=4247, max=47635, avg=7712.21, stdev=3399.18 00:37:56.692 clat (usec): min=1067, max=309175, avg=48702.16, stdev=27362.19 00:37:56.692 lat (usec): min=1072, max=309182, avg=48709.88, stdev=27362.32 00:37:56.692 clat percentiles (msec): 00:37:56.692 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:37:56.692 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:37:56.692 | 70.00th=[ 43], 80.00th=[ 49], 90.00th=[ 52], 95.00th=[ 91], 00:37:56.692 | 99.00th=[ 190], 99.50th=[ 209], 99.90th=[ 255], 99.95th=[ 268], 00:37:56.692 | 99.99th=[ 305] 00:37:56.692 write: IOPS=2255, BW=9024KiB/s (9240kB/s)(256MiB/29050msec); 0 zone resets 00:37:56.692 slat (usec): min=5, max=229, avg= 8.91, stdev= 5.82 00:37:56.692 clat (usec): min=523, max=56045, avg=8147.46, stdev=8320.74 00:37:56.692 lat (usec): min=535, max=56055, avg=8156.37, stdev=8320.90 00:37:56.692 clat percentiles (usec): 00:37:56.692 | 1.00th=[ 1156], 5.00th=[ 1549], 10.00th=[ 1893], 20.00th=[ 3228], 00:37:56.692 | 30.00th=[ 4228], 40.00th=[ 5604], 50.00th=[ 6259], 60.00th=[ 7177], 00:37:56.692 | 70.00th=[ 7701], 80.00th=[ 9110], 90.00th=[15401], 95.00th=[23200], 00:37:56.692 | 99.00th=[45876], 99.50th=[49021], 99.90th=[53216], 99.95th=[54264], 00:37:56.692 | 99.99th=[55313] 00:37:56.692 bw ( KiB/s): min= 9144, max=41472, per=100.00%, avg=24792.38, stdev=9407.43, samples=21 00:37:56.692 iops : min= 2286, max=10368, avg=6198.10, stdev=2351.86, samples=21 00:37:56.692 lat (usec) : 750=0.02%, 1000=0.19% 00:37:56.692 lat (msec) : 2=5.47%, 4=7.94%, 10=27.69%, 20=7.34%, 50=42.86% 00:37:56.692 lat (msec) : 100=6.18%, 250=2.25%, 500=0.06% 00:37:56.692 cpu : usr=98.77%, sys=0.58%, ctx=276, majf=0, minf=5537 00:37:56.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:37:56.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.692 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:56.692 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:56.692 second_half: (groupid=0, jobs=1): err= 0: pid=76953: Fri Dec 6 07:04:27 2024 00:37:56.692 read: IOPS=2270, BW=9082KiB/s (9300kB/s)(256MiB/28844msec) 00:37:56.692 slat (usec): min=4, max=126, avg= 7.69, stdev= 3.44 00:37:56.692 clat (msec): min=11, max=247, avg=49.03, stdev=24.24 00:37:56.692 lat (msec): min=12, max=248, avg=49.03, stdev=24.24 00:37:56.692 clat percentiles (msec): 00:37:56.692 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:37:56.692 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:37:56.692 | 70.00th=[ 44], 80.00th=[ 50], 90.00th=[ 53], 95.00th=[ 82], 00:37:56.692 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 
215], 99.95th=[ 224], 00:37:56.692 | 99.99th=[ 243] 00:37:56.692 write: IOPS=2285, BW=9141KiB/s (9360kB/s)(256MiB/28678msec); 0 zone resets 00:37:56.692 slat (usec): min=4, max=253, avg= 8.64, stdev= 5.47 00:37:56.692 clat (usec): min=476, max=48133, avg=7318.19, stdev=4662.70 00:37:56.692 lat (usec): min=489, max=48140, avg=7326.83, stdev=4662.80 00:37:56.692 clat percentiles (usec): 00:37:56.692 | 1.00th=[ 1270], 5.00th=[ 2180], 10.00th=[ 3195], 20.00th=[ 4146], 00:37:56.692 | 30.00th=[ 5145], 40.00th=[ 5735], 50.00th=[ 6390], 60.00th=[ 7046], 00:37:56.692 | 70.00th=[ 7504], 80.00th=[ 8979], 90.00th=[13566], 95.00th=[15795], 00:37:56.692 | 99.00th=[25035], 99.50th=[32900], 99.90th=[44827], 99.95th=[45876], 00:37:56.692 | 99.99th=[46924] 00:37:56.692 bw ( KiB/s): min= 4608, max=41080, per=100.00%, avg=22707.48, stdev=11638.25, samples=23 00:37:56.692 iops : min= 1152, max=10270, avg=5676.87, stdev=2909.56, samples=23 00:37:56.692 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.16% 00:37:56.692 lat (msec) : 2=1.79%, 4=7.04%, 10=32.31%, 20=8.07%, 50=41.46% 00:37:56.692 lat (msec) : 100=7.08%, 250=2.02% 00:37:56.692 cpu : usr=98.74%, sys=0.46%, ctx=183, majf=0, minf=5576 00:37:56.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:37:56.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.692 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:56.692 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:56.692 00:37:56.692 Run status group 0 (all jobs): 00:37:56.692 READ: bw=17.6MiB/s (18.4MB/s), 8999KiB/s-9082KiB/s (9215kB/s-9300kB/s), io=512MiB (536MB), run=28844-29104msec 00:37:56.692 WRITE: bw=17.6MiB/s (18.5MB/s), 9024KiB/s-9141KiB/s (9240kB/s-9360kB/s), io=512MiB (537MB), run=28678-29050msec 00:37:56.951 ----------------------------------------------------- 00:37:56.951 Suppressions used: 00:37:56.951 count bytes template 00:37:56.951 2 10 /usr/src/fio/parse.c 00:37:56.951 2 192 /usr/src/fio/iolog.c 00:37:56.951 1 8 libtcmalloc_minimal.so 00:37:56.951 1 904 libcrypto.so 00:37:56.951 ----------------------------------------------------- 00:37:56.951 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:56.951 07:04:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:37:57.211 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:37:57.211 fio-3.35 00:37:57.211 Starting 1 thread 00:38:15.346 00:38:15.346 test: (groupid=0, jobs=1): err= 0: pid=77306: Fri Dec 6 07:04:47 2024 00:38:15.346 read: IOPS=5848, BW=22.8MiB/s (24.0MB/s)(255MiB/11149msec) 00:38:15.346 slat (nsec): min=4322, max=63585, avg=7076.83, stdev=3589.65 00:38:15.346 clat (usec): min=1031, max=42863, avg=21874.04, stdev=1135.49 00:38:15.346 lat (usec): min=1050, max=42871, avg=21881.12, stdev=1135.55 00:38:15.346 clat percentiles (usec): 00:38:15.346 | 1.00th=[20579], 5.00th=[20841], 10.00th=[21103], 20.00th=[21365], 00:38:15.346 | 30.00th=[21627], 40.00th=[21627], 50.00th=[21890], 60.00th=[21890], 00:38:15.346 | 70.00th=[22152], 80.00th=[22152], 90.00th=[22414], 95.00th=[22676], 00:38:15.346 | 99.00th=[26608], 99.50th=[28967], 99.90th=[32113], 99.95th=[37487], 00:38:15.346 | 99.99th=[41681] 00:38:15.346 write: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(256MiB/5763msec); 0 zone resets 00:38:15.346 slat (usec): min=4, max=408, avg=10.12, stdev= 7.11 00:38:15.346 clat (usec): min=724, max=73948, avg=11189.48, stdev=14541.36 00:38:15.346 lat (usec): min=732, max=73956, avg=11199.60, stdev=14541.41 00:38:15.346 clat percentiles (usec): 00:38:15.346 | 1.00th=[ 1029], 5.00th=[ 1237], 10.00th=[ 1352], 20.00th=[ 1516], 00:38:15.346 | 30.00th=[ 1713], 40.00th=[ 2212], 50.00th=[ 7111], 60.00th=[ 8029], 00:38:15.346 | 70.00th=[ 9110], 80.00th=[10683], 90.00th=[42206], 95.00th=[44303], 00:38:15.346 | 99.00th=[49021], 99.50th=[52167], 99.90th=[59507], 99.95th=[61080], 00:38:15.346 | 99.99th=[71828] 00:38:15.346 bw ( KiB/s): min=17256, max=65544, per=96.05%, avg=43690.00, stdev=12950.68, samples=12 00:38:15.346 iops : min= 4314, max=16386, avg=10922.67, stdev=3237.61, samples=12 00:38:15.346 lat (usec) : 750=0.01%, 1000=0.36% 00:38:15.346 lat (msec) : 2=18.65%, 4=1.91%, 10=17.41%, 20=3.84%, 50=57.47% 00:38:15.346 lat (msec) : 100=0.36% 00:38:15.346 cpu : usr=98.14%, sys=0.92%, ctx=35, majf=0, minf=5565 00:38:15.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:38:15.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.346 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:15.346 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:15.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:15.346 00:38:15.346 Run status group 0 (all jobs): 00:38:15.346 READ: bw=22.8MiB/s (24.0MB/s), 22.8MiB/s-22.8MiB/s (24.0MB/s-24.0MB/s), io=255MiB (267MB), run=11149-11149msec 00:38:15.346 WRITE: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=256MiB (268MB), run=5763-5763msec 00:38:16.721 ----------------------------------------------------- 00:38:16.721 Suppressions used: 00:38:16.721 count bytes template 00:38:16.721 1 5 /usr/src/fio/parse.c 00:38:16.721 2 192 /usr/src/fio/iolog.c 00:38:16.721 1 8 libtcmalloc_minimal.so 00:38:16.721 1 904 libcrypto.so 00:38:16.721 ----------------------------------------------------- 00:38:16.721 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:38:16.978 Remove shared memory files 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57914 /dev/shm/spdk_tgt_trace.pid75578 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:38:16.978 ************************************ 00:38:16.978 END TEST ftl_fio_basic 00:38:16.978 ************************************ 00:38:16.978 00:38:16.978 real 1m12.692s 00:38:16.978 user 2m40.218s 00:38:16.978 sys 0m3.658s 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:16.978 07:04:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:16.978 07:04:49 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:38:16.978 07:04:49 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:16.978 07:04:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:16.978 07:04:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:16.978 ************************************ 00:38:16.978 START TEST ftl_bdevperf 00:38:16.978 ************************************ 00:38:16.978 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:38:16.978 * Looking for test storage... 
00:38:16.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:16.978 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:16.978 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:38:16.978 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:17.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.236 --rc genhtml_branch_coverage=1 00:38:17.236 --rc genhtml_function_coverage=1 00:38:17.236 --rc genhtml_legend=1 00:38:17.236 --rc geninfo_all_blocks=1 00:38:17.236 --rc geninfo_unexecuted_blocks=1 00:38:17.236 00:38:17.236 ' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:17.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.236 --rc genhtml_branch_coverage=1 00:38:17.236 
--rc genhtml_function_coverage=1 00:38:17.236 --rc genhtml_legend=1 00:38:17.236 --rc geninfo_all_blocks=1 00:38:17.236 --rc geninfo_unexecuted_blocks=1 00:38:17.236 00:38:17.236 ' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:17.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.236 --rc genhtml_branch_coverage=1 00:38:17.236 --rc genhtml_function_coverage=1 00:38:17.236 --rc genhtml_legend=1 00:38:17.236 --rc geninfo_all_blocks=1 00:38:17.236 --rc geninfo_unexecuted_blocks=1 00:38:17.236 00:38:17.236 ' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:17.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.236 --rc genhtml_branch_coverage=1 00:38:17.236 --rc genhtml_function_coverage=1 00:38:17.236 --rc genhtml_legend=1 00:38:17.236 --rc geninfo_all_blocks=1 00:38:17.236 --rc geninfo_unexecuted_blocks=1 00:38:17.236 00:38:17.236 ' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:17.236 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77573 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77573 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77573 ']' 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:17.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:17.237 07:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:17.237 [2024-12-06 07:04:49.732696] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
00:38:17.237 [2024-12-06 07:04:49.733040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77573 ] 00:38:17.495 [2024-12-06 07:04:49.894809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.495 [2024-12-06 07:04:49.979316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.430 07:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:18.430 07:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:38:18.430 07:04:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:38:18.430 07:04:50 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:38:18.430 07:04:50 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:38:18.430 07:04:50 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:38:18.430 07:04:50 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:38:18.430 07:04:50 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:18.689 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:38:18.689 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:38:18.689 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:38:18.689 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:38:18.689 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:18.689 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:38:18.689 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:38:18.689 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:18.948 { 00:38:18.948 "name": "nvme0n1", 00:38:18.948 "aliases": [ 00:38:18.948 "5072153d-5b96-41e7-91e3-b6077b7965e5" 00:38:18.948 ], 00:38:18.948 "product_name": "NVMe disk", 00:38:18.948 "block_size": 4096, 00:38:18.948 "num_blocks": 1310720, 00:38:18.948 "uuid": "5072153d-5b96-41e7-91e3-b6077b7965e5", 00:38:18.948 "numa_id": -1, 00:38:18.948 "assigned_rate_limits": { 00:38:18.948 "rw_ios_per_sec": 0, 00:38:18.948 "rw_mbytes_per_sec": 0, 00:38:18.948 "r_mbytes_per_sec": 0, 00:38:18.948 "w_mbytes_per_sec": 0 00:38:18.948 }, 00:38:18.948 "claimed": true, 00:38:18.948 "claim_type": "read_many_write_one", 00:38:18.948 "zoned": false, 00:38:18.948 "supported_io_types": { 00:38:18.948 "read": true, 00:38:18.948 "write": true, 00:38:18.948 "unmap": true, 00:38:18.948 "flush": true, 00:38:18.948 "reset": true, 00:38:18.948 "nvme_admin": true, 00:38:18.948 "nvme_io": true, 00:38:18.948 "nvme_io_md": false, 00:38:18.948 "write_zeroes": true, 00:38:18.948 "zcopy": false, 00:38:18.948 "get_zone_info": false, 00:38:18.948 "zone_management": false, 00:38:18.948 "zone_append": false, 00:38:18.948 "compare": true, 00:38:18.948 "compare_and_write": false, 00:38:18.948 "abort": true, 00:38:18.948 "seek_hole": false, 00:38:18.948 "seek_data": false, 00:38:18.948 "copy": true, 00:38:18.948 "nvme_iov_md": false 00:38:18.948 }, 00:38:18.948 "driver_specific": { 00:38:18.948 
"nvme": [ 00:38:18.948 { 00:38:18.948 "pci_address": "0000:00:11.0", 00:38:18.948 "trid": { 00:38:18.948 "trtype": "PCIe", 00:38:18.948 "traddr": "0000:00:11.0" 00:38:18.948 }, 00:38:18.948 "ctrlr_data": { 00:38:18.948 "cntlid": 0, 00:38:18.948 "vendor_id": "0x1b36", 00:38:18.948 "model_number": "QEMU NVMe Ctrl", 00:38:18.948 "serial_number": "12341", 00:38:18.948 "firmware_revision": "8.0.0", 00:38:18.948 "subnqn": "nqn.2019-08.org.qemu:12341", 00:38:18.948 "oacs": { 00:38:18.948 "security": 0, 00:38:18.948 "format": 1, 00:38:18.948 "firmware": 0, 00:38:18.948 "ns_manage": 1 00:38:18.948 }, 00:38:18.948 "multi_ctrlr": false, 00:38:18.948 "ana_reporting": false 00:38:18.948 }, 00:38:18.948 "vs": { 00:38:18.948 "nvme_version": "1.4" 00:38:18.948 }, 00:38:18.948 "ns_data": { 00:38:18.948 "id": 1, 00:38:18.948 "can_share": false 00:38:18.948 } 00:38:18.948 } 00:38:18.948 ], 00:38:18.948 "mp_policy": "active_passive" 00:38:18.948 } 00:38:18.948 } 00:38:18.948 ]' 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:18.948 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:19.207 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d008129c-276e-425d-b417-361ca6f45427 00:38:19.207 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:38:19.207 07:04:51 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d008129c-276e-425d-b417-361ca6f45427 00:38:19.466 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=79ac54ce-ed33-42cb-8654-ec42f5893485 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 79ac54ce-ed33-42cb-8654-ec42f5893485 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:20.034 07:04:52 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:38:20.034 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:20.293 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:20.293 { 00:38:20.293 "name": "6e2f0687-7a8b-4fea-9e59-3d457e0a2592", 00:38:20.293 "aliases": [ 00:38:20.293 "lvs/nvme0n1p0" 00:38:20.293 ], 00:38:20.293 "product_name": "Logical Volume", 00:38:20.293 "block_size": 4096, 00:38:20.293 "num_blocks": 26476544, 00:38:20.293 "uuid": "6e2f0687-7a8b-4fea-9e59-3d457e0a2592", 00:38:20.293 "assigned_rate_limits": { 00:38:20.293 "rw_ios_per_sec": 0, 00:38:20.293 "rw_mbytes_per_sec": 0, 00:38:20.293 "r_mbytes_per_sec": 0, 00:38:20.293 "w_mbytes_per_sec": 0 00:38:20.293 }, 00:38:20.293 "claimed": false, 00:38:20.293 "zoned": false, 00:38:20.293 "supported_io_types": { 00:38:20.293 "read": true, 00:38:20.293 "write": true, 00:38:20.293 "unmap": true, 00:38:20.293 "flush": false, 00:38:20.293 "reset": true, 00:38:20.293 "nvme_admin": false, 00:38:20.293 "nvme_io": false, 00:38:20.293 "nvme_io_md": false, 00:38:20.293 "write_zeroes": true, 00:38:20.293 "zcopy": false, 00:38:20.293 "get_zone_info": false, 00:38:20.293 "zone_management": false, 00:38:20.293 "zone_append": false, 00:38:20.293 "compare": false, 00:38:20.293 "compare_and_write": false, 00:38:20.293 "abort": false, 00:38:20.293 "seek_hole": true, 00:38:20.293 "seek_data": true, 00:38:20.293 "copy": false, 00:38:20.293 "nvme_iov_md": false 00:38:20.293 }, 00:38:20.293 "driver_specific": { 00:38:20.293 "lvol": { 00:38:20.293 "lvol_store_uuid": "79ac54ce-ed33-42cb-8654-ec42f5893485", 00:38:20.293 "base_bdev": "nvme0n1", 00:38:20.293 "thin_provision": true, 00:38:20.293 "num_allocated_clusters": 0, 00:38:20.293 "snapshot": false, 00:38:20.293 "clone": false, 00:38:20.293 "esnap_clone": false 00:38:20.293 } 00:38:20.293 } 00:38:20.293 } 00:38:20.293 ]' 00:38:20.293 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:20.553 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:38:20.553 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:20.553 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:38:20.553 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:20.553 07:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:38:20.553 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:38:20.553 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:38:20.553 07:04:52 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:38:20.813 07:04:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:38:20.813 07:04:53 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:38:20.813 07:04:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:20.813 07:04:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:20.813 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:20.813 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:38:20.813 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:38:20.813 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:21.073 { 00:38:21.073 "name": "6e2f0687-7a8b-4fea-9e59-3d457e0a2592", 00:38:21.073 "aliases": [ 00:38:21.073 "lvs/nvme0n1p0" 00:38:21.073 ], 00:38:21.073 "product_name": "Logical Volume", 00:38:21.073 "block_size": 4096, 00:38:21.073 "num_blocks": 26476544, 00:38:21.073 "uuid": "6e2f0687-7a8b-4fea-9e59-3d457e0a2592", 00:38:21.073 "assigned_rate_limits": { 00:38:21.073 "rw_ios_per_sec": 0, 00:38:21.073 "rw_mbytes_per_sec": 0, 00:38:21.073 "r_mbytes_per_sec": 0, 00:38:21.073 "w_mbytes_per_sec": 0 00:38:21.073 }, 00:38:21.073 "claimed": false, 00:38:21.073 "zoned": false, 00:38:21.073 "supported_io_types": { 00:38:21.073 "read": true, 00:38:21.073 "write": true, 00:38:21.073 "unmap": true, 00:38:21.073 "flush": false, 00:38:21.073 "reset": true, 00:38:21.073 "nvme_admin": false, 00:38:21.073 "nvme_io": false, 00:38:21.073 "nvme_io_md": false, 00:38:21.073 "write_zeroes": true, 00:38:21.073 "zcopy": false, 00:38:21.073 "get_zone_info": false, 00:38:21.073 "zone_management": false, 00:38:21.073 "zone_append": false, 00:38:21.073 "compare": false, 00:38:21.073 "compare_and_write": false, 00:38:21.073 "abort": false, 00:38:21.073 "seek_hole": true, 00:38:21.073 "seek_data": true, 00:38:21.073 "copy": false, 00:38:21.073 "nvme_iov_md": false 00:38:21.073 }, 00:38:21.073 "driver_specific": { 00:38:21.073 "lvol": { 00:38:21.073 "lvol_store_uuid": "79ac54ce-ed33-42cb-8654-ec42f5893485", 00:38:21.073 "base_bdev": "nvme0n1", 00:38:21.073 "thin_provision": true, 00:38:21.073 "num_allocated_clusters": 0, 00:38:21.073 "snapshot": false, 00:38:21.073 "clone": false, 00:38:21.073 "esnap_clone": false 00:38:21.073 } 00:38:21.073 } 00:38:21.073 } 00:38:21.073 ]' 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:38:21.073 07:04:53 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:38:21.332 07:04:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:38:21.332 07:04:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:21.332 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:21.332 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:21.332 07:04:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:38:21.332 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:38:21.332 07:04:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e2f0687-7a8b-4fea-9e59-3d457e0a2592 00:38:21.591 07:04:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:21.591 { 00:38:21.591 "name": "6e2f0687-7a8b-4fea-9e59-3d457e0a2592", 00:38:21.591 "aliases": [ 00:38:21.591 "lvs/nvme0n1p0" 00:38:21.591 ], 00:38:21.591 "product_name": "Logical Volume", 00:38:21.591 "block_size": 4096, 00:38:21.591 "num_blocks": 26476544, 00:38:21.591 "uuid": "6e2f0687-7a8b-4fea-9e59-3d457e0a2592", 00:38:21.591 "assigned_rate_limits": { 00:38:21.591 "rw_ios_per_sec": 0, 00:38:21.591 "rw_mbytes_per_sec": 0, 00:38:21.591 "r_mbytes_per_sec": 0, 00:38:21.591 "w_mbytes_per_sec": 0 00:38:21.591 }, 00:38:21.591 "claimed": false, 00:38:21.591 "zoned": false, 00:38:21.591 "supported_io_types": { 00:38:21.591 "read": true, 00:38:21.591 "write": true, 00:38:21.591 "unmap": true, 00:38:21.591 "flush": false, 00:38:21.591 "reset": true, 00:38:21.591 "nvme_admin": false, 00:38:21.591 "nvme_io": false, 00:38:21.591 "nvme_io_md": false, 00:38:21.591 "write_zeroes": true, 00:38:21.591 "zcopy": false, 00:38:21.591 "get_zone_info": false, 00:38:21.591 "zone_management": false, 00:38:21.591 "zone_append": false, 00:38:21.591 "compare": false, 00:38:21.591 "compare_and_write": false, 00:38:21.591 "abort": false, 00:38:21.591 "seek_hole": true, 00:38:21.591 "seek_data": true, 00:38:21.591 "copy": false, 00:38:21.591 "nvme_iov_md": false 00:38:21.591 }, 00:38:21.591 "driver_specific": { 00:38:21.591 "lvol": { 00:38:21.591 "lvol_store_uuid": "79ac54ce-ed33-42cb-8654-ec42f5893485", 00:38:21.591 "base_bdev": "nvme0n1", 00:38:21.591 "thin_provision": true, 00:38:21.591 "num_allocated_clusters": 0, 00:38:21.591 "snapshot": false, 00:38:21.591 "clone": false, 00:38:21.591 "esnap_clone": false 00:38:21.591 } 00:38:21.591 } 00:38:21.591 } 00:38:21.591 ]' 00:38:21.591 07:04:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:21.591 07:04:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:38:21.591 07:04:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:21.850 07:04:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:38:21.850 07:04:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:21.850 07:04:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:38:21.850 07:04:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:38:21.850 07:04:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6e2f0687-7a8b-4fea-9e59-3d457e0a2592 -c nvc0n1p0 --l2p_dram_limit 20 00:38:21.850 [2024-12-06 07:04:54.415811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.850 [2024-12-06 07:04:54.416021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:21.850 [2024-12-06 07:04:54.416051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:21.850 [2024-12-06 07:04:54.416065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.850 [2024-12-06 07:04:54.416144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.850 [2024-12-06 07:04:54.416163] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:21.850 [2024-12-06 07:04:54.416175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:38:21.850 [2024-12-06 07:04:54.416186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.850 [2024-12-06 07:04:54.416236] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:21.850 [2024-12-06 07:04:54.417187] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:21.850 [2024-12-06 07:04:54.417210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.850 [2024-12-06 07:04:54.417223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:21.850 [2024-12-06 07:04:54.417235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:38:21.850 [2024-12-06 07:04:54.417246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.850 [2024-12-06 07:04:54.417394] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 13fdb5a6-dc94-4775-aa4b-999dc35951ec 00:38:21.850 [2024-12-06 07:04:54.418416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.850 [2024-12-06 07:04:54.418451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:38:21.850 [2024-12-06 07:04:54.418486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:38:21.850 [2024-12-06 07:04:54.418496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.850 [2024-12-06 07:04:54.422511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.851 [2024-12-06 07:04:54.422549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:21.851 [2024-12-06 07:04:54.422581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.961 ms 00:38:21.851 [2024-12-06 07:04:54.422593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.851 [2024-12-06 07:04:54.422692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.851 [2024-12-06 07:04:54.422723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:21.851 [2024-12-06 07:04:54.422753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:38:21.851 [2024-12-06 07:04:54.422766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.851 [2024-12-06 07:04:54.422845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.851 [2024-12-06 07:04:54.422861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:21.851 [2024-12-06 07:04:54.422874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:38:21.851 [2024-12-06 07:04:54.422884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.851 [2024-12-06 07:04:54.422925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:21.851 [2024-12-06 07:04:54.426681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.851 [2024-12-06 07:04:54.426909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:21.851 [2024-12-06 07:04:54.426935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:38:21.851 [2024-12-06 07:04:54.426955] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.851 [2024-12-06 07:04:54.426998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.851 [2024-12-06 07:04:54.427015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:21.851 [2024-12-06 07:04:54.427026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:38:21.851 [2024-12-06 07:04:54.427037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.851 [2024-12-06 07:04:54.427074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:38:21.851 [2024-12-06 07:04:54.427248] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:21.851 [2024-12-06 07:04:54.427264] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:21.851 [2024-12-06 07:04:54.427279] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:21.851 [2024-12-06 07:04:54.427292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:21.851 [2024-12-06 07:04:54.427313] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:21.851 [2024-12-06 07:04:54.427340] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:21.851 [2024-12-06 07:04:54.427350] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:21.851 [2024-12-06 07:04:54.427360] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:21.851 [2024-12-06 07:04:54.427372] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:21.851 [2024-12-06 07:04:54.427384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.851 [2024-12-06 07:04:54.427395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:21.851 [2024-12-06 07:04:54.427405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:38:21.851 [2024-12-06 07:04:54.427416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.851 [2024-12-06 07:04:54.427505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.851 [2024-12-06 07:04:54.427519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:21.851 [2024-12-06 07:04:54.427528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:38:21.851 [2024-12-06 07:04:54.427540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.851 [2024-12-06 07:04:54.427617] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:21.851 [2024-12-06 07:04:54.427634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:21.851 [2024-12-06 07:04:54.427644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:21.851 [2024-12-06 07:04:54.427655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:21.851 [2024-12-06 07:04:54.427674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:21.851 
[2024-12-06 07:04:54.427692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:21.851 [2024-12-06 07:04:54.427701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:21.851 [2024-12-06 07:04:54.427718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:21.851 [2024-12-06 07:04:54.427741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:21.851 [2024-12-06 07:04:54.427751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:21.851 [2024-12-06 07:04:54.427761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:21.851 [2024-12-06 07:04:54.427769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:21.851 [2024-12-06 07:04:54.427816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:21.851 [2024-12-06 07:04:54.427855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:21.851 [2024-12-06 07:04:54.427864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:21.851 [2024-12-06 07:04:54.427888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:21.851 [2024-12-06 07:04:54.427907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:21.851 [2024-12-06 07:04:54.427918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:21.851 [2024-12-06 07:04:54.427937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:21.851 [2024-12-06 07:04:54.427946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:21.851 [2024-12-06 07:04:54.427965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:21.851 [2024-12-06 07:04:54.427975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:21.851 [2024-12-06 07:04:54.427988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:21.851 [2024-12-06 07:04:54.428000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:21.851 [2024-12-06 07:04:54.428008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:21.851 [2024-12-06 07:04:54.428019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:21.851 [2024-12-06 07:04:54.428028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:21.851 [2024-12-06 07:04:54.428038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:21.851 [2024-12-06 07:04:54.428047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:21.851 [2024-12-06 07:04:54.428058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:21.851 [2024-12-06 07:04:54.428067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:38:21.852 [2024-12-06 07:04:54.428077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:21.852 [2024-12-06 07:04:54.428086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:21.852 [2024-12-06 07:04:54.428096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:21.852 [2024-12-06 07:04:54.428105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:21.852 [2024-12-06 07:04:54.428115] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:21.852 [2024-12-06 07:04:54.428125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:21.852 [2024-12-06 07:04:54.428152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:21.852 [2024-12-06 07:04:54.428177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:21.852 [2024-12-06 07:04:54.428206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:21.852 [2024-12-06 07:04:54.428259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:21.852 [2024-12-06 07:04:54.428272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:21.852 [2024-12-06 07:04:54.428283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:21.852 [2024-12-06 07:04:54.428295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:21.852 [2024-12-06 07:04:54.428304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:21.852 [2024-12-06 07:04:54.428317] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:21.852 [2024-12-06 07:04:54.428331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:21.852 [2024-12-06 07:04:54.428344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:21.852 [2024-12-06 07:04:54.428355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:21.852 [2024-12-06 07:04:54.428379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:21.852 [2024-12-06 07:04:54.428389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:21.852 [2024-12-06 07:04:54.428401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:21.852 [2024-12-06 07:04:54.428412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:21.852 [2024-12-06 07:04:54.428424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:21.852 [2024-12-06 07:04:54.428434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:21.852 [2024-12-06 07:04:54.428449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:21.852 [2024-12-06 07:04:54.428460] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:21.852 [2024-12-06 07:04:54.428472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:21.852 [2024-12-06 07:04:54.428483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:21.852 [2024-12-06 07:04:54.428495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:21.852 [2024-12-06 07:04:54.428506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:21.852 [2024-12-06 07:04:54.428518] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:21.852 [2024-12-06 07:04:54.428530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:21.852 [2024-12-06 07:04:54.428561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:21.852 [2024-12-06 07:04:54.428571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:21.852 [2024-12-06 07:04:54.428583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:21.852 [2024-12-06 07:04:54.428608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:21.852 [2024-12-06 07:04:54.428621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:21.852 [2024-12-06 07:04:54.428631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:21.852 [2024-12-06 07:04:54.428643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:38:21.852 [2024-12-06 07:04:54.428653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:21.852 [2024-12-06 07:04:54.428709] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:38:21.852 [2024-12-06 07:04:54.428726] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:24.398 [2024-12-06 07:04:56.866172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.866266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:24.398 [2024-12-06 07:04:56.866304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2437.472 ms 00:38:24.398 [2024-12-06 07:04:56.866315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.398 [2024-12-06 07:04:56.892890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.892942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:24.398 [2024-12-06 07:04:56.892979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.359 ms 00:38:24.398 [2024-12-06 07:04:56.892990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.398 [2024-12-06 07:04:56.893155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.893172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:24.398 [2024-12-06 07:04:56.893188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:38:24.398 [2024-12-06 07:04:56.893198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.398 [2024-12-06 07:04:56.939902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.940177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:24.398 [2024-12-06 07:04:56.940210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.636 ms 00:38:24.398 [2024-12-06 07:04:56.940261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.398 [2024-12-06 07:04:56.940317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.940333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:24.398 [2024-12-06 07:04:56.940347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:38:24.398 [2024-12-06 07:04:56.940360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.398 [2024-12-06 07:04:56.940789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.940829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:24.398 [2024-12-06 07:04:56.940845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:38:24.398 [2024-12-06 07:04:56.940855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.398 [2024-12-06 07:04:56.940998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.941014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:24.398 [2024-12-06 07:04:56.941028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:38:24.398 [2024-12-06 07:04:56.941038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.398 [2024-12-06 07:04:56.954720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.954755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:24.398 [2024-12-06 
07:04:56.954788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.658 ms 00:38:24.398 [2024-12-06 07:04:56.954810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.398 [2024-12-06 07:04:56.965717] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:38:24.398 [2024-12-06 07:04:56.970230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.398 [2024-12-06 07:04:56.970281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:24.398 [2024-12-06 07:04:56.970296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.343 ms 00:38:24.398 [2024-12-06 07:04:56.970308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.657 [2024-12-06 07:04:57.032704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.657 [2024-12-06 07:04:57.032799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:24.657 [2024-12-06 07:04:57.032819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.363 ms 00:38:24.657 [2024-12-06 07:04:57.032831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.657 [2024-12-06 07:04:57.033016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.657 [2024-12-06 07:04:57.033038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:24.657 [2024-12-06 07:04:57.033050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:38:24.657 [2024-12-06 07:04:57.033064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.657 [2024-12-06 07:04:57.058440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.657 [2024-12-06 07:04:57.058499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:38:24.657 [2024-12-06 07:04:57.058516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.323 ms 00:38:24.657 [2024-12-06 07:04:57.058528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.657 [2024-12-06 07:04:57.083403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.657 [2024-12-06 07:04:57.083460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:24.657 [2024-12-06 07:04:57.083476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.834 ms 00:38:24.657 [2024-12-06 07:04:57.083488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.657 [2024-12-06 07:04:57.084232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.657 [2024-12-06 07:04:57.084298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:24.657 [2024-12-06 07:04:57.084313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:38:24.657 [2024-12-06 07:04:57.084327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.657 [2024-12-06 07:04:57.160496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.657 [2024-12-06 07:04:57.160623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:24.657 [2024-12-06 07:04:57.160643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.125 ms 00:38:24.657 [2024-12-06 07:04:57.160687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.657 [2024-12-06 
07:04:57.188467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.657 [2024-12-06 07:04:57.188717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:38:24.657 [2024-12-06 07:04:57.188770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.681 ms 00:38:24.658 [2024-12-06 07:04:57.188786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.658 [2024-12-06 07:04:57.215915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.658 [2024-12-06 07:04:57.215973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:38:24.658 [2024-12-06 07:04:57.215989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.082 ms 00:38:24.658 [2024-12-06 07:04:57.216001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.658 [2024-12-06 07:04:57.243589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.658 [2024-12-06 07:04:57.243648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:24.658 [2024-12-06 07:04:57.243665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.546 ms 00:38:24.658 [2024-12-06 07:04:57.243677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.658 [2024-12-06 07:04:57.243753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.658 [2024-12-06 07:04:57.243777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:24.658 [2024-12-06 07:04:57.243806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:24.658 [2024-12-06 07:04:57.243818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.658 [2024-12-06 07:04:57.243930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.658 [2024-12-06 07:04:57.243952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:24.658 [2024-12-06 07:04:57.243964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:38:24.658 [2024-12-06 07:04:57.243976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.658 [2024-12-06 07:04:57.245093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2828.748 ms, result 0 00:38:24.916 { 00:38:24.916 "name": "ftl0", 00:38:24.916 "uuid": "13fdb5a6-dc94-4775-aa4b-999dc35951ec" 00:38:24.916 } 00:38:24.916 07:04:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:38:24.916 07:04:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:38:24.916 07:04:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:38:25.174 07:04:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:38:25.174 [2024-12-06 07:04:57.685196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:38:25.174 I/O size of 69632 is greater than zero copy threshold (65536). 00:38:25.174 Zero copy mechanism will not be used. 00:38:25.174 Running I/O for 4 seconds... 
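For orientation: the ftl0 bdev that the runs below exercise was assembled by the RPC calls traced earlier in this log. A minimal manual sketch of that sequence, assuming the same bdev names, PCI addresses, and sizes shown in the trace above (UUID placeholders stand in for the values the script captured):

    # attach the base (data) and cache NVMe controllers
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # thin-provisioned 103424 MiB lvol on the base device
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
    # carve a 5171 MiB NV-cache partition, then create the FTL bdev with a 20 MiB L2P DRAM budget
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20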
00:38:27.486 1672.00 IOPS, 111.03 MiB/s [2024-12-06T07:05:01.013Z] 1695.50 IOPS, 112.59 MiB/s [2024-12-06T07:05:01.947Z] 1703.67 IOPS, 113.13 MiB/s [2024-12-06T07:05:01.947Z] 1698.50 IOPS, 112.79 MiB/s 00:38:29.356 Latency(us) 00:38:29.356 [2024-12-06T07:05:01.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.356 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:38:29.356 ftl0 : 4.00 1697.88 112.75 0.00 0.00 618.75 251.35 2129.92 00:38:29.356 [2024-12-06T07:05:01.947Z] =================================================================================================================== 00:38:29.356 [2024-12-06T07:05:01.947Z] Total : 1697.88 112.75 0.00 0.00 618.75 251.35 2129.92 00:38:29.356 [2024-12-06 07:05:01.695306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:38:29.356 { 00:38:29.356 "results": [ 00:38:29.356 { 00:38:29.356 "job": "ftl0", 00:38:29.356 "core_mask": "0x1", 00:38:29.356 "workload": "randwrite", 00:38:29.356 "status": "finished", 00:38:29.356 "queue_depth": 1, 00:38:29.356 "io_size": 69632, 00:38:29.356 "runtime": 4.00206, 00:38:29.356 "iops": 1697.8755940690544, 00:38:29.356 "mibps": 112.74955116864814, 00:38:29.356 "io_failed": 0, 00:38:29.356 "io_timeout": 0, 00:38:29.356 "avg_latency_us": 618.7535583651081, 00:38:29.356 "min_latency_us": 251.34545454545454, 00:38:29.356 "max_latency_us": 2129.92 00:38:29.356 } 00:38:29.356 ], 00:38:29.356 "core_count": 1 00:38:29.356 } 00:38:29.356 07:05:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:38:29.356 [2024-12-06 07:05:01.851178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:38:29.356 Running I/O for 4 seconds... 
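A note on reading the results blocks: the mibps field is just iops scaled by the I/O size, so each summary can be sanity-checked with a one-liner. For the 68 KiB run above, using nothing beyond the JSON fields already printed:

  # 1697.88 IOPS * 69632 bytes per I/O / 1048576 bytes per MiB ~= 112.75 MiB/s, matching the report
  awk 'BEGIN { printf "%.2f MiB/s\n", 1697.88 * 69632 / 1048576 }'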
00:38:31.672 7779.00 IOPS, 30.39 MiB/s [2024-12-06T07:05:05.199Z] 7368.00 IOPS, 28.78 MiB/s [2024-12-06T07:05:06.132Z] 7089.00 IOPS, 27.69 MiB/s [2024-12-06T07:05:06.133Z] 6950.50 IOPS, 27.15 MiB/s 00:38:33.542 Latency(us) 00:38:33.542 [2024-12-06T07:05:06.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.542 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.542 ftl0 : 4.02 6944.97 27.13 0.00 0.00 18378.47 309.06 33363.78 00:38:33.542 [2024-12-06T07:05:06.133Z] =================================================================================================================== 00:38:33.542 [2024-12-06T07:05:06.133Z] Total : 6944.97 27.13 0.00 0.00 18378.47 0.00 33363.78 00:38:33.542 [2024-12-06 07:05:05.880433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:38:33.542 { 00:38:33.542 "results": [ 00:38:33.542 { 00:38:33.542 "job": "ftl0", 00:38:33.542 "core_mask": "0x1", 00:38:33.542 "workload": "randwrite", 00:38:33.542 "status": "finished", 00:38:33.542 "queue_depth": 128, 00:38:33.542 "io_size": 4096, 00:38:33.542 "runtime": 4.02075, 00:38:33.542 "iops": 6944.972952807312, 00:38:33.542 "mibps": 27.128800596903563, 00:38:33.542 "io_failed": 0, 00:38:33.542 "io_timeout": 0, 00:38:33.542 "avg_latency_us": 18378.46769334948, 00:38:33.542 "min_latency_us": 309.0618181818182, 00:38:33.542 "max_latency_us": 33363.781818181815 00:38:33.542 } 00:38:33.542 ], 00:38:33.542 "core_count": 1 00:38:33.542 } 00:38:33.542 07:05:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:38:33.542 [2024-12-06 07:05:06.028869] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:38:33.542 Running I/O for 4 seconds... 
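The verify pass just launched differs from the two randwrite passes: bdevperf writes the LBA range and then reads it back with data comparison, which is why its results JSON below carries a verify_range block. An illustrative way to pull the headline numbers out of any of these runs, assuming (as this transcript suggests) that perform_tests prints the results JSON on stdout; the jq filter itself is a hypothetical convenience, not part of the test:

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 \
    | jq -r '.results[] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us"'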
00:38:35.849 4644.00 IOPS, 18.14 MiB/s [2024-12-06T07:05:09.374Z] 4637.00 IOPS, 18.11 MiB/s [2024-12-06T07:05:10.309Z] 4645.00 IOPS, 18.14 MiB/s [2024-12-06T07:05:10.309Z] 4642.50 IOPS, 18.13 MiB/s 00:38:37.718 Latency(us) 00:38:37.718 [2024-12-06T07:05:10.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.718 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:37.718 Verification LBA range: start 0x0 length 0x1400000 00:38:37.718 ftl0 : 4.02 4653.82 18.18 0.00 0.00 27392.65 407.74 29431.62 00:38:37.718 [2024-12-06T07:05:10.309Z] =================================================================================================================== 00:38:37.718 [2024-12-06T07:05:10.309Z] Total : 4653.82 18.18 0.00 0.00 27392.65 0.00 29431.62 00:38:37.718 [2024-12-06 07:05:10.061483] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:38:37.718 { 00:38:37.718 "results": [ 00:38:37.718 { 00:38:37.718 "job": "ftl0", 00:38:37.718 "core_mask": "0x1", 00:38:37.718 "workload": "verify", 00:38:37.718 "status": "finished", 00:38:37.718 "verify_range": { 00:38:37.718 "start": 0, 00:38:37.718 "length": 20971520 00:38:37.718 }, 00:38:37.718 "queue_depth": 128, 00:38:37.718 "io_size": 4096, 00:38:37.718 "runtime": 4.017777, 00:38:37.718 "iops": 4653.817272586309, 00:38:37.718 "mibps": 18.17897372104027, 00:38:37.718 "io_failed": 0, 00:38:37.718 "io_timeout": 0, 00:38:37.718 "avg_latency_us": 27392.64657804918, 00:38:37.718 "min_latency_us": 407.73818181818183, 00:38:37.718 "max_latency_us": 29431.62181818182 00:38:37.718 } 00:38:37.718 ], 00:38:37.718 "core_count": 1 00:38:37.718 } 00:38:37.718 07:05:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:38:37.977 [2024-12-06 07:05:10.357053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.977 [2024-12-06 07:05:10.357121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:37.977 [2024-12-06 07:05:10.357171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:37.977 [2024-12-06 07:05:10.357183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.977 [2024-12-06 07:05:10.357208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:37.977 [2024-12-06 07:05:10.360006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.977 [2024-12-06 07:05:10.360256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:37.977 [2024-12-06 07:05:10.360306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.774 ms 00:38:37.977 [2024-12-06 07:05:10.360319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.977 [2024-12-06 07:05:10.362072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.977 [2024-12-06 07:05:10.362158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:37.977 [2024-12-06 07:05:10.362178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:38:37.977 [2024-12-06 07:05:10.362187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.977 [2024-12-06 07:05:10.531667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.977 [2024-12-06 07:05:10.531932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:38:37.977 [2024-12-06 07:05:10.531970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 169.442 ms 00:38:37.977 [2024-12-06 07:05:10.531984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.977 [2024-12-06 07:05:10.537614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.977 [2024-12-06 07:05:10.537645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:37.977 [2024-12-06 07:05:10.537660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.579 ms 00:38:37.977 [2024-12-06 07:05:10.537672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.977 [2024-12-06 07:05:10.562415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.977 [2024-12-06 07:05:10.562454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:37.977 [2024-12-06 07:05:10.562503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.663 ms 00:38:37.977 [2024-12-06 07:05:10.562513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.237 [2024-12-06 07:05:10.579083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.237 [2024-12-06 07:05:10.579260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:38.237 [2024-12-06 07:05:10.579293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.510 ms 00:38:38.237 [2024-12-06 07:05:10.579305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.237 [2024-12-06 07:05:10.579465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.237 [2024-12-06 07:05:10.579485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:38.237 [2024-12-06 07:05:10.579501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:38:38.237 [2024-12-06 07:05:10.579512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.237 [2024-12-06 07:05:10.604796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.237 [2024-12-06 07:05:10.604833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:38.237 [2024-12-06 07:05:10.604851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.260 ms 00:38:38.237 [2024-12-06 07:05:10.604861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.237 [2024-12-06 07:05:10.629435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.237 [2024-12-06 07:05:10.629474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:38.237 [2024-12-06 07:05:10.629491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.532 ms 00:38:38.237 [2024-12-06 07:05:10.629501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.237 [2024-12-06 07:05:10.653510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.237 [2024-12-06 07:05:10.653548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:38.237 [2024-12-06 07:05:10.653566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.970 ms 00:38:38.237 [2024-12-06 07:05:10.653576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.237 [2024-12-06 07:05:10.677659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.237 [2024-12-06 07:05:10.677698] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:38.237 [2024-12-06 07:05:10.677747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.006 ms 00:38:38.237 [2024-12-06 07:05:10.677758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.237 [2024-12-06 07:05:10.677801] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:38.237 [2024-12-06 07:05:10.677837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.677996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:38:38.237 [2024-12-06 07:05:10.678133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:38.237 [2024-12-06 07:05:10.678234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.678989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679068] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:38.238 [2024-12-06 07:05:10.679122] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:38.238 [2024-12-06 07:05:10.679135] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13fdb5a6-dc94-4775-aa4b-999dc35951ec 00:38:38.238 [2024-12-06 07:05:10.679148] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:38.238 [2024-12-06 07:05:10.679160] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:38.238 [2024-12-06 07:05:10.679169] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:38.238 [2024-12-06 07:05:10.679181] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:38.238 [2024-12-06 07:05:10.679192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:38.238 [2024-12-06 07:05:10.679203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:38.238 [2024-12-06 07:05:10.679213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:38.238 [2024-12-06 07:05:10.679226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:38.238 [2024-12-06 07:05:10.679235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:38.238 [2024-12-06 07:05:10.679247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.238 [2024-12-06 07:05:10.679257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:38.238 [2024-12-06 07:05:10.679270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:38:38.238 [2024-12-06 07:05:10.679281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.238 [2024-12-06 07:05:10.692879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.238 [2024-12-06 07:05:10.692914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:38.238 [2024-12-06 07:05:10.692930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.554 ms 00:38:38.238 [2024-12-06 07:05:10.692941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.238 [2024-12-06 07:05:10.693290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:38.238 [2024-12-06 07:05:10.693305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:38.238 [2024-12-06 07:05:10.693317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:38:38.239 [2024-12-06 07:05:10.693326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.239 [2024-12-06 07:05:10.728732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.239 [2024-12-06 07:05:10.728770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:38.239 [2024-12-06 07:05:10.728788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.239 [2024-12-06 07:05:10.728798] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:38:38.239 [2024-12-06 07:05:10.728848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.239 [2024-12-06 07:05:10.728861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:38.239 [2024-12-06 07:05:10.728872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.239 [2024-12-06 07:05:10.728881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.239 [2024-12-06 07:05:10.728975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.239 [2024-12-06 07:05:10.728993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:38.239 [2024-12-06 07:05:10.729006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.239 [2024-12-06 07:05:10.729015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.239 [2024-12-06 07:05:10.729038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.239 [2024-12-06 07:05:10.729050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:38.239 [2024-12-06 07:05:10.729061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.239 [2024-12-06 07:05:10.729070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.239 [2024-12-06 07:05:10.814318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.239 [2024-12-06 07:05:10.814368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:38.239 [2024-12-06 07:05:10.814407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.239 [2024-12-06 07:05:10.814417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.498 [2024-12-06 07:05:10.884539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.498 [2024-12-06 07:05:10.884850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:38.498 [2024-12-06 07:05:10.884884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.498 [2024-12-06 07:05:10.884898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.498 [2024-12-06 07:05:10.885033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.498 [2024-12-06 07:05:10.885053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:38.499 [2024-12-06 07:05:10.885068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.499 [2024-12-06 07:05:10.885079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.499 [2024-12-06 07:05:10.885140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.499 [2024-12-06 07:05:10.885172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:38.499 [2024-12-06 07:05:10.885215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.499 [2024-12-06 07:05:10.885226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.499 [2024-12-06 07:05:10.885368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.499 [2024-12-06 07:05:10.885388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:38.499 [2024-12-06 07:05:10.885403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:38:38.499 [2024-12-06 07:05:10.885412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.499 [2024-12-06 07:05:10.885458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.499 [2024-12-06 07:05:10.885474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:38.499 [2024-12-06 07:05:10.885486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.499 [2024-12-06 07:05:10.885495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.499 [2024-12-06 07:05:10.885536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.499 [2024-12-06 07:05:10.885551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:38.499 [2024-12-06 07:05:10.885563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.499 [2024-12-06 07:05:10.885583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.499 [2024-12-06 07:05:10.885632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:38.499 [2024-12-06 07:05:10.885648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:38.499 [2024-12-06 07:05:10.885660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:38.499 [2024-12-06 07:05:10.885669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:38.499 [2024-12-06 07:05:10.885817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.712 ms, result 0 00:38:38.499 true 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77573 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77573 ']' 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77573 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77573 00:38:38.499 killing process with pid 77573 00:38:38.499 Received shutdown signal, test time was about 4.000000 seconds 00:38:38.499 00:38:38.499 Latency(us) 00:38:38.499 [2024-12-06T07:05:11.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.499 [2024-12-06T07:05:11.090Z] =================================================================================================================== 00:38:38.499 [2024-12-06T07:05:11.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77573' 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77573 00:38:38.499 07:05:10 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77573 00:38:39.437 Remove shared memory files 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:39.437 07:05:11 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:38:39.437 ************************************ 00:38:39.437 END TEST ftl_bdevperf 00:38:39.437 ************************************ 00:38:39.437 00:38:39.437 real 0m22.384s 00:38:39.437 user 0m26.250s 00:38:39.437 sys 0m0.985s 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:39.437 07:05:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:39.437 07:05:11 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:38:39.437 07:05:11 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:39.437 07:05:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:39.437 07:05:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:39.437 ************************************ 00:38:39.437 START TEST ftl_trim 00:38:39.437 ************************************ 00:38:39.437 07:05:11 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:38:39.437 * Looking for test storage... 00:38:39.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:39.437 07:05:11 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:39.437 07:05:11 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:38:39.437 07:05:11 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:39.697 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:39.697 07:05:12 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:38:39.697 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:39.697 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:39.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.697 --rc genhtml_branch_coverage=1 00:38:39.697 --rc genhtml_function_coverage=1 00:38:39.697 --rc genhtml_legend=1 00:38:39.697 --rc geninfo_all_blocks=1 00:38:39.697 --rc geninfo_unexecuted_blocks=1 00:38:39.697 00:38:39.697 ' 00:38:39.697 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:39.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.697 --rc genhtml_branch_coverage=1 00:38:39.697 --rc genhtml_function_coverage=1 00:38:39.697 --rc genhtml_legend=1 00:38:39.697 --rc geninfo_all_blocks=1 00:38:39.697 --rc geninfo_unexecuted_blocks=1 00:38:39.697 00:38:39.697 ' 00:38:39.697 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:39.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.697 --rc genhtml_branch_coverage=1 00:38:39.697 --rc genhtml_function_coverage=1 00:38:39.697 --rc genhtml_legend=1 00:38:39.697 --rc geninfo_all_blocks=1 00:38:39.697 --rc geninfo_unexecuted_blocks=1 00:38:39.697 00:38:39.697 ' 00:38:39.697 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:39.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.697 --rc genhtml_branch_coverage=1 00:38:39.697 --rc genhtml_function_coverage=1 00:38:39.697 --rc genhtml_legend=1 00:38:39.697 --rc geninfo_all_blocks=1 00:38:39.697 --rc geninfo_unexecuted_blocks=1 00:38:39.697 00:38:39.697 ' 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:38:39.697 07:05:12 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:38:39.698 07:05:12 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:39.698 07:05:12 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:39.698 07:05:12 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:38:39.698 07:05:12 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=77913 00:38:39.698 07:05:12 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 77913 00:38:39.698 07:05:12 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:38:39.698 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77913 ']' 00:38:39.698 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:39.698 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:39.698 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:39.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:39.698 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:39.698 07:05:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:38:39.698 [2024-12-06 07:05:12.216934] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:38:39.698 [2024-12-06 07:05:12.217094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77913 ] 00:38:39.957 [2024-12-06 07:05:12.395804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:39.957 [2024-12-06 07:05:12.477753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:39.957 [2024-12-06 07:05:12.477835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.957 [2024-12-06 07:05:12.477852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:40.893 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:40.893 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:38:40.893 07:05:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:38:40.893 07:05:13 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:38:40.893 07:05:13 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:38:40.893 07:05:13 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:38:40.893 07:05:13 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:38:40.893 07:05:13 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:41.152 07:05:13 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:38:41.152 07:05:13 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:38:41.152 07:05:13 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:38:41.152 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:38:41.152 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:41.152 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:38:41.152 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:38:41.152 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:38:41.430 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:41.430 { 00:38:41.430 "name": "nvme0n1", 00:38:41.430 "aliases": [ 
00:38:41.430 "779e7ce9-65aa-4a13-812e-0349b22749fa" 00:38:41.430 ], 00:38:41.430 "product_name": "NVMe disk", 00:38:41.430 "block_size": 4096, 00:38:41.430 "num_blocks": 1310720, 00:38:41.430 "uuid": "779e7ce9-65aa-4a13-812e-0349b22749fa", 00:38:41.430 "numa_id": -1, 00:38:41.430 "assigned_rate_limits": { 00:38:41.430 "rw_ios_per_sec": 0, 00:38:41.430 "rw_mbytes_per_sec": 0, 00:38:41.430 "r_mbytes_per_sec": 0, 00:38:41.430 "w_mbytes_per_sec": 0 00:38:41.430 }, 00:38:41.430 "claimed": true, 00:38:41.430 "claim_type": "read_many_write_one", 00:38:41.430 "zoned": false, 00:38:41.430 "supported_io_types": { 00:38:41.430 "read": true, 00:38:41.430 "write": true, 00:38:41.430 "unmap": true, 00:38:41.430 "flush": true, 00:38:41.430 "reset": true, 00:38:41.430 "nvme_admin": true, 00:38:41.430 "nvme_io": true, 00:38:41.430 "nvme_io_md": false, 00:38:41.430 "write_zeroes": true, 00:38:41.430 "zcopy": false, 00:38:41.430 "get_zone_info": false, 00:38:41.430 "zone_management": false, 00:38:41.430 "zone_append": false, 00:38:41.430 "compare": true, 00:38:41.430 "compare_and_write": false, 00:38:41.430 "abort": true, 00:38:41.430 "seek_hole": false, 00:38:41.430 "seek_data": false, 00:38:41.430 "copy": true, 00:38:41.431 "nvme_iov_md": false 00:38:41.431 }, 00:38:41.431 "driver_specific": { 00:38:41.431 "nvme": [ 00:38:41.431 { 00:38:41.431 "pci_address": "0000:00:11.0", 00:38:41.431 "trid": { 00:38:41.431 "trtype": "PCIe", 00:38:41.431 "traddr": "0000:00:11.0" 00:38:41.431 }, 00:38:41.431 "ctrlr_data": { 00:38:41.431 "cntlid": 0, 00:38:41.431 "vendor_id": "0x1b36", 00:38:41.431 "model_number": "QEMU NVMe Ctrl", 00:38:41.431 "serial_number": "12341", 00:38:41.431 "firmware_revision": "8.0.0", 00:38:41.431 "subnqn": "nqn.2019-08.org.qemu:12341", 00:38:41.431 "oacs": { 00:38:41.431 "security": 0, 00:38:41.431 "format": 1, 00:38:41.431 "firmware": 0, 00:38:41.431 "ns_manage": 1 00:38:41.431 }, 00:38:41.431 "multi_ctrlr": false, 00:38:41.431 "ana_reporting": false 00:38:41.431 }, 00:38:41.431 "vs": { 00:38:41.431 "nvme_version": "1.4" 00:38:41.431 }, 00:38:41.431 "ns_data": { 00:38:41.431 "id": 1, 00:38:41.431 "can_share": false 00:38:41.431 } 00:38:41.431 } 00:38:41.431 ], 00:38:41.431 "mp_policy": "active_passive" 00:38:41.431 } 00:38:41.431 } 00:38:41.431 ]' 00:38:41.431 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:41.431 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:38:41.431 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:41.431 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:38:41.431 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:38:41.431 07:05:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:38:41.431 07:05:13 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:38:41.431 07:05:13 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:38:41.431 07:05:13 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:38:41.431 07:05:13 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:41.431 07:05:13 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:41.699 07:05:14 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=79ac54ce-ed33-42cb-8654-ec42f5893485 00:38:41.699 07:05:14 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:38:41.699 07:05:14 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 79ac54ce-ed33-42cb-8654-ec42f5893485 00:38:41.957 07:05:14 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:38:42.216 07:05:14 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=57f8b668-eb7a-41a6-a72a-abc20374f8aa 00:38:42.216 07:05:14 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 57f8b668-eb7a-41a6-a72a-abc20374f8aa 00:38:42.474 07:05:14 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:42.474 07:05:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:42.474 07:05:14 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:38:42.474 07:05:14 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:38:42.474 07:05:14 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:42.474 07:05:14 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:38:42.474 07:05:14 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:42.474 07:05:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:42.474 07:05:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:42.474 07:05:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:38:42.474 07:05:14 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:38:42.474 07:05:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:42.733 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:42.733 { 00:38:42.733 "name": "09c83529-f0ea-494e-8e55-d73a0ed720c6", 00:38:42.733 "aliases": [ 00:38:42.733 "lvs/nvme0n1p0" 00:38:42.733 ], 00:38:42.733 "product_name": "Logical Volume", 00:38:42.733 "block_size": 4096, 00:38:42.733 "num_blocks": 26476544, 00:38:42.733 "uuid": "09c83529-f0ea-494e-8e55-d73a0ed720c6", 00:38:42.733 "assigned_rate_limits": { 00:38:42.733 "rw_ios_per_sec": 0, 00:38:42.733 "rw_mbytes_per_sec": 0, 00:38:42.733 "r_mbytes_per_sec": 0, 00:38:42.733 "w_mbytes_per_sec": 0 00:38:42.733 }, 00:38:42.733 "claimed": false, 00:38:42.733 "zoned": false, 00:38:42.733 "supported_io_types": { 00:38:42.733 "read": true, 00:38:42.733 "write": true, 00:38:42.733 "unmap": true, 00:38:42.733 "flush": false, 00:38:42.733 "reset": true, 00:38:42.733 "nvme_admin": false, 00:38:42.733 "nvme_io": false, 00:38:42.733 "nvme_io_md": false, 00:38:42.733 "write_zeroes": true, 00:38:42.733 "zcopy": false, 00:38:42.733 "get_zone_info": false, 00:38:42.733 "zone_management": false, 00:38:42.733 "zone_append": false, 00:38:42.733 "compare": false, 00:38:42.733 "compare_and_write": false, 00:38:42.733 "abort": false, 00:38:42.733 "seek_hole": true, 00:38:42.733 "seek_data": true, 00:38:42.733 "copy": false, 00:38:42.733 "nvme_iov_md": false 00:38:42.733 }, 00:38:42.733 "driver_specific": { 00:38:42.733 "lvol": { 00:38:42.733 "lvol_store_uuid": "57f8b668-eb7a-41a6-a72a-abc20374f8aa", 00:38:42.733 "base_bdev": "nvme0n1", 00:38:42.733 "thin_provision": true, 00:38:42.733 "num_allocated_clusters": 0, 00:38:42.733 "snapshot": false, 00:38:42.733 "clone": false, 00:38:42.733 "esnap_clone": false 00:38:42.733 } 00:38:42.733 } 00:38:42.733 } 00:38:42.733 ]' 00:38:42.733 07:05:15 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:42.733 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:38:42.733 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:42.733 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:38:42.733 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:42.733 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:38:42.733 07:05:15 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:38:42.733 07:05:15 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:38:42.733 07:05:15 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:38:42.991 07:05:15 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:38:42.991 07:05:15 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:38:42.991 07:05:15 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:42.991 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:42.991 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:42.991 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:38:42.991 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:38:42.991 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:43.248 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:43.248 { 00:38:43.249 "name": "09c83529-f0ea-494e-8e55-d73a0ed720c6", 00:38:43.249 "aliases": [ 00:38:43.249 "lvs/nvme0n1p0" 00:38:43.249 ], 00:38:43.249 "product_name": "Logical Volume", 00:38:43.249 "block_size": 4096, 00:38:43.249 "num_blocks": 26476544, 00:38:43.249 "uuid": "09c83529-f0ea-494e-8e55-d73a0ed720c6", 00:38:43.249 "assigned_rate_limits": { 00:38:43.249 "rw_ios_per_sec": 0, 00:38:43.249 "rw_mbytes_per_sec": 0, 00:38:43.249 "r_mbytes_per_sec": 0, 00:38:43.249 "w_mbytes_per_sec": 0 00:38:43.249 }, 00:38:43.249 "claimed": false, 00:38:43.249 "zoned": false, 00:38:43.249 "supported_io_types": { 00:38:43.249 "read": true, 00:38:43.249 "write": true, 00:38:43.249 "unmap": true, 00:38:43.249 "flush": false, 00:38:43.249 "reset": true, 00:38:43.249 "nvme_admin": false, 00:38:43.249 "nvme_io": false, 00:38:43.249 "nvme_io_md": false, 00:38:43.249 "write_zeroes": true, 00:38:43.249 "zcopy": false, 00:38:43.249 "get_zone_info": false, 00:38:43.249 "zone_management": false, 00:38:43.249 "zone_append": false, 00:38:43.249 "compare": false, 00:38:43.249 "compare_and_write": false, 00:38:43.249 "abort": false, 00:38:43.249 "seek_hole": true, 00:38:43.249 "seek_data": true, 00:38:43.249 "copy": false, 00:38:43.249 "nvme_iov_md": false 00:38:43.249 }, 00:38:43.249 "driver_specific": { 00:38:43.249 "lvol": { 00:38:43.249 "lvol_store_uuid": "57f8b668-eb7a-41a6-a72a-abc20374f8aa", 00:38:43.249 "base_bdev": "nvme0n1", 00:38:43.249 "thin_provision": true, 00:38:43.249 "num_allocated_clusters": 0, 00:38:43.249 "snapshot": false, 00:38:43.249 "clone": false, 00:38:43.249 "esnap_clone": false 00:38:43.249 } 00:38:43.249 } 00:38:43.249 } 00:38:43.249 ]' 00:38:43.249 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:43.506 07:05:15 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:38:43.506 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:43.506 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:38:43.506 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:43.506 07:05:15 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:38:43.506 07:05:15 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:38:43.506 07:05:15 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:38:43.764 07:05:16 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:38:43.764 07:05:16 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:38:43.764 07:05:16 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:43.764 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:43.764 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:43.764 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:38:43.764 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:38:43.764 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 09c83529-f0ea-494e-8e55-d73a0ed720c6 00:38:44.021 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:44.021 { 00:38:44.021 "name": "09c83529-f0ea-494e-8e55-d73a0ed720c6", 00:38:44.021 "aliases": [ 00:38:44.021 "lvs/nvme0n1p0" 00:38:44.021 ], 00:38:44.021 "product_name": "Logical Volume", 00:38:44.021 "block_size": 4096, 00:38:44.021 "num_blocks": 26476544, 00:38:44.021 "uuid": "09c83529-f0ea-494e-8e55-d73a0ed720c6", 00:38:44.021 "assigned_rate_limits": { 00:38:44.021 "rw_ios_per_sec": 0, 00:38:44.021 "rw_mbytes_per_sec": 0, 00:38:44.021 "r_mbytes_per_sec": 0, 00:38:44.021 "w_mbytes_per_sec": 0 00:38:44.021 }, 00:38:44.021 "claimed": false, 00:38:44.021 "zoned": false, 00:38:44.021 "supported_io_types": { 00:38:44.021 "read": true, 00:38:44.021 "write": true, 00:38:44.021 "unmap": true, 00:38:44.021 "flush": false, 00:38:44.021 "reset": true, 00:38:44.021 "nvme_admin": false, 00:38:44.021 "nvme_io": false, 00:38:44.021 "nvme_io_md": false, 00:38:44.021 "write_zeroes": true, 00:38:44.021 "zcopy": false, 00:38:44.021 "get_zone_info": false, 00:38:44.021 "zone_management": false, 00:38:44.021 "zone_append": false, 00:38:44.021 "compare": false, 00:38:44.021 "compare_and_write": false, 00:38:44.021 "abort": false, 00:38:44.021 "seek_hole": true, 00:38:44.021 "seek_data": true, 00:38:44.021 "copy": false, 00:38:44.021 "nvme_iov_md": false 00:38:44.021 }, 00:38:44.021 "driver_specific": { 00:38:44.021 "lvol": { 00:38:44.021 "lvol_store_uuid": "57f8b668-eb7a-41a6-a72a-abc20374f8aa", 00:38:44.021 "base_bdev": "nvme0n1", 00:38:44.021 "thin_provision": true, 00:38:44.021 "num_allocated_clusters": 0, 00:38:44.021 "snapshot": false, 00:38:44.021 "clone": false, 00:38:44.021 "esnap_clone": false 00:38:44.021 } 00:38:44.021 } 00:38:44.021 } 00:38:44.021 ]' 00:38:44.021 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:44.021 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:38:44.021 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:44.021 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:38:44.021 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:44.021 07:05:16 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:38:44.021 07:05:16 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:38:44.021 07:05:16 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 09c83529-f0ea-494e-8e55-d73a0ed720c6 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:38:44.280 [2024-12-06 07:05:16.738580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.738631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:44.280 [2024-12-06 07:05:16.738671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:44.280 [2024-12-06 07:05:16.738683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.742061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.742256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:44.280 [2024-12-06 07:05:16.742307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:38:44.280 [2024-12-06 07:05:16.742321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.742585] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:44.280 [2024-12-06 07:05:16.743558] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:44.280 [2024-12-06 07:05:16.743618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.743648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:44.280 [2024-12-06 07:05:16.743662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:38:44.280 [2024-12-06 07:05:16.743673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.743874] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a47c3814-54f8-4eb2-8588-0c95ee6f413a 00:38:44.280 [2024-12-06 07:05:16.745076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.745117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:38:44.280 [2024-12-06 07:05:16.745151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:38:44.280 [2024-12-06 07:05:16.745164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.749786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.749850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:44.280 [2024-12-06 07:05:16.749880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.540 ms 00:38:44.280 [2024-12-06 07:05:16.749894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.750043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.750066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:44.280 [2024-12-06 07:05:16.750079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.080 ms 00:38:44.280 [2024-12-06 07:05:16.750096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.750139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.750156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:44.280 [2024-12-06 07:05:16.750168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:38:44.280 [2024-12-06 07:05:16.750184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.750224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:38:44.280 [2024-12-06 07:05:16.754232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.754272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:44.280 [2024-12-06 07:05:16.754308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.013 ms 00:38:44.280 [2024-12-06 07:05:16.754319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.754418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.280 [2024-12-06 07:05:16.754453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:44.280 [2024-12-06 07:05:16.754469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:38:44.280 [2024-12-06 07:05:16.754479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.280 [2024-12-06 07:05:16.754517] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:38:44.280 [2024-12-06 07:05:16.754657] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:44.280 [2024-12-06 07:05:16.754685] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:44.280 [2024-12-06 07:05:16.754701] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:44.281 [2024-12-06 07:05:16.754780] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:44.281 [2024-12-06 07:05:16.754810] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:44.281 [2024-12-06 07:05:16.754827] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:38:44.281 [2024-12-06 07:05:16.754838] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:44.281 [2024-12-06 07:05:16.754851] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:44.281 [2024-12-06 07:05:16.754864] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:44.281 [2024-12-06 07:05:16.754878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.281 [2024-12-06 07:05:16.754889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:44.281 [2024-12-06 07:05:16.754903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:38:44.281 [2024-12-06 07:05:16.754915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.281 [2024-12-06 07:05:16.755023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.281 
[2024-12-06 07:05:16.755044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:44.281 [2024-12-06 07:05:16.755060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:38:44.281 [2024-12-06 07:05:16.755071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.281 [2024-12-06 07:05:16.755220] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:44.281 [2024-12-06 07:05:16.755237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:44.281 [2024-12-06 07:05:16.755252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:44.281 [2024-12-06 07:05:16.755289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:44.281 [2024-12-06 07:05:16.755326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:44.281 [2024-12-06 07:05:16.755351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:44.281 [2024-12-06 07:05:16.755362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:38:44.281 [2024-12-06 07:05:16.755374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:44.281 [2024-12-06 07:05:16.755400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:44.281 [2024-12-06 07:05:16.755413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:38:44.281 [2024-12-06 07:05:16.755423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:44.281 [2024-12-06 07:05:16.755448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:44.281 [2024-12-06 07:05:16.755483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:44.281 [2024-12-06 07:05:16.755515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:44.281 [2024-12-06 07:05:16.755550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:38:44.281 [2024-12-06 07:05:16.755583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:44.281 [2024-12-06 07:05:16.755634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:44.281 [2024-12-06 07:05:16.755671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:44.281 [2024-12-06 07:05:16.755680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:38:44.281 [2024-12-06 07:05:16.755693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:44.281 [2024-12-06 07:05:16.755704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:44.281 [2024-12-06 07:05:16.755733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:38:44.281 [2024-12-06 07:05:16.755743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:44.281 [2024-12-06 07:05:16.755765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:38:44.281 [2024-12-06 07:05:16.755777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755787] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:44.281 [2024-12-06 07:05:16.755800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:44.281 [2024-12-06 07:05:16.755811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:44.281 [2024-12-06 07:05:16.755871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:44.281 [2024-12-06 07:05:16.755886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:44.281 [2024-12-06 07:05:16.755896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:44.281 [2024-12-06 07:05:16.755909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:44.281 [2024-12-06 07:05:16.755919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:44.281 [2024-12-06 07:05:16.755931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:44.281 [2024-12-06 07:05:16.755942] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:44.281 [2024-12-06 07:05:16.755957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:44.281 [2024-12-06 07:05:16.755971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:38:44.281 [2024-12-06 07:05:16.755984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:38:44.281 [2024-12-06 07:05:16.755994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:38:44.281 [2024-12-06 07:05:16.756006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:38:44.281 [2024-12-06 07:05:16.756016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:38:44.281 [2024-12-06 07:05:16.756028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:38:44.281 [2024-12-06 07:05:16.756038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:38:44.281 [2024-12-06 07:05:16.756084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:38:44.281 [2024-12-06 07:05:16.756095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:38:44.281 [2024-12-06 07:05:16.756110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:38:44.281 [2024-12-06 07:05:16.756121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:38:44.281 [2024-12-06 07:05:16.756134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:38:44.281 [2024-12-06 07:05:16.756161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:38:44.281 [2024-12-06 07:05:16.756175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:38:44.281 [2024-12-06 07:05:16.756186] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:44.281 [2024-12-06 07:05:16.756205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:44.281 [2024-12-06 07:05:16.756218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:44.282 [2024-12-06 07:05:16.756258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:44.282 [2024-12-06 07:05:16.756280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:44.282 [2024-12-06 07:05:16.756294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:44.282 [2024-12-06 07:05:16.756307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:44.282 [2024-12-06 07:05:16.756321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:44.282 [2024-12-06 07:05:16.756333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:38:44.282 [2024-12-06 07:05:16.756347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:44.282 [2024-12-06 07:05:16.756438] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:38:44.282 [2024-12-06 07:05:16.756461] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:46.819 [2024-12-06 07:05:19.100600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.100705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:46.819 [2024-12-06 07:05:19.100749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2344.176 ms 00:38:46.819 [2024-12-06 07:05:19.100766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.128461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.128545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:46.819 [2024-12-06 07:05:19.128582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.271 ms 00:38:46.819 [2024-12-06 07:05:19.128599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.128816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.128847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:46.819 [2024-12-06 07:05:19.128899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:38:46.819 [2024-12-06 07:05:19.128916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.176837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.176908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:46.819 [2024-12-06 07:05:19.176927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.880 ms 00:38:46.819 [2024-12-06 07:05:19.176942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.177080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.177103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:46.819 [2024-12-06 07:05:19.177116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:46.819 [2024-12-06 07:05:19.177144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.177469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.177509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:46.819 [2024-12-06 07:05:19.177538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:38:46.819 [2024-12-06 07:05:19.177551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.177697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.177714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:46.819 [2024-12-06 07:05:19.177760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:38:46.819 [2024-12-06 07:05:19.177777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.193870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.193931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:38:46.819 [2024-12-06 07:05:19.193949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.057 ms 00:38:46.819 [2024-12-06 07:05:19.193962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.205491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:46.819 [2024-12-06 07:05:19.218492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.218553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:46.819 [2024-12-06 07:05:19.218590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.394 ms 00:38:46.819 [2024-12-06 07:05:19.218602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.285200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.285256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:46.819 [2024-12-06 07:05:19.285294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.452 ms 00:38:46.819 [2024-12-06 07:05:19.285305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.285566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.285604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:46.819 [2024-12-06 07:05:19.285622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:38:46.819 [2024-12-06 07:05:19.285634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.314024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.314065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:38:46.819 [2024-12-06 07:05:19.314100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.346 ms 00:38:46.819 [2024-12-06 07:05:19.314111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.340654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.340693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:46.819 [2024-12-06 07:05:19.340760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.447 ms 00:38:46.819 [2024-12-06 07:05:19.340773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:46.819 [2024-12-06 07:05:19.341642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:46.819 [2024-12-06 07:05:19.341675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:46.819 [2024-12-06 07:05:19.341709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:38:46.819 [2024-12-06 07:05:19.341731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:47.078 [2024-12-06 07:05:19.418897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:47.078 [2024-12-06 07:05:19.418951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:47.078 [2024-12-06 07:05:19.418990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.124 ms 00:38:47.078 [2024-12-06 07:05:19.419003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
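The three identical bdev_get_bdevs dumps earlier in this trace come from the get_bdev_size helper (autotest_common.sh@1382-1392), which reads a bdev's block_size and num_blocks over RPC and converts the product to MiB. A rough reconstruction inferred from the xtrace alone; the real helper may differ in detail:

    # Sketch reconstructed from the xtrace above, not the actual source.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        bdev_info=$($rpc bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        # In this run: 4096 B x 26476544 blocks = 103424 MiB
        echo $((bs * nb / 1024 / 1024))
    }

The startup trace surrounding this point is the output of the bdev_ftl_create call issued at ftl/trim.sh@49, repeated here for reference. The bdev names and lvol UUID are specific to this run, and the 5171 MiB NV-cache split created earlier matches the base size divided by 20 (5% of 103424 MiB, rounded down):

    # -d: thin-provisioned base lvol, -c: NV-cache split bdev.
    # -t 240 raises the RPC timeout because startup scrubs the NV cache
    # (~2.3 s in this run, per the "Scrub NV cache" step above).
    $rpc -t 240 bdev_ftl_create -b ftl0 \
        -d 09c83529-f0ea-494e-8e55-d73a0ed720c6 \
        -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10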
00:38:47.078 [2024-12-06 07:05:19.449548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:47.078 [2024-12-06 07:05:19.449592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:38:47.078 [2024-12-06 07:05:19.449631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.406 ms 00:38:47.078 [2024-12-06 07:05:19.449644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:47.078 [2024-12-06 07:05:19.479559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:47.078 [2024-12-06 07:05:19.479599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:38:47.078 [2024-12-06 07:05:19.479635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.769 ms 00:38:47.078 [2024-12-06 07:05:19.479646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:47.078 [2024-12-06 07:05:19.507946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:47.078 [2024-12-06 07:05:19.508193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:47.078 [2024-12-06 07:05:19.508254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.139 ms 00:38:47.078 [2024-12-06 07:05:19.508270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:47.078 [2024-12-06 07:05:19.508385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:47.078 [2024-12-06 07:05:19.508409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:47.078 [2024-12-06 07:05:19.508429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:47.078 [2024-12-06 07:05:19.508442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:47.078 [2024-12-06 07:05:19.508571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:47.078 [2024-12-06 07:05:19.508588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:47.078 [2024-12-06 07:05:19.508603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:38:47.078 [2024-12-06 07:05:19.508614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:47.078 [2024-12-06 07:05:19.509680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:47.078 [2024-12-06 07:05:19.513443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2770.735 ms, result 0 00:38:47.078 [2024-12-06 07:05:19.514456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:47.078 { 00:38:47.078 "name": "ftl0", 00:38:47.078 "uuid": "a47c3814-54f8-4eb2-8588-0c95ee6f413a" 00:38:47.078 } 00:38:47.078 07:05:19 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:38:47.078 07:05:19 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:38:47.078 07:05:19 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:47.078 07:05:19 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:38:47.078 07:05:19 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:47.078 07:05:19 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:47.078 07:05:19 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:47.337 07:05:19 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:38:47.595 [ 00:38:47.595 { 00:38:47.595 "name": "ftl0", 00:38:47.595 "aliases": [ 00:38:47.595 "a47c3814-54f8-4eb2-8588-0c95ee6f413a" 00:38:47.595 ], 00:38:47.595 "product_name": "FTL disk", 00:38:47.595 "block_size": 4096, 00:38:47.595 "num_blocks": 23592960, 00:38:47.595 "uuid": "a47c3814-54f8-4eb2-8588-0c95ee6f413a", 00:38:47.595 "assigned_rate_limits": { 00:38:47.595 "rw_ios_per_sec": 0, 00:38:47.595 "rw_mbytes_per_sec": 0, 00:38:47.595 "r_mbytes_per_sec": 0, 00:38:47.595 "w_mbytes_per_sec": 0 00:38:47.595 }, 00:38:47.595 "claimed": false, 00:38:47.595 "zoned": false, 00:38:47.595 "supported_io_types": { 00:38:47.595 "read": true, 00:38:47.595 "write": true, 00:38:47.595 "unmap": true, 00:38:47.595 "flush": true, 00:38:47.595 "reset": false, 00:38:47.595 "nvme_admin": false, 00:38:47.595 "nvme_io": false, 00:38:47.595 "nvme_io_md": false, 00:38:47.595 "write_zeroes": true, 00:38:47.595 "zcopy": false, 00:38:47.595 "get_zone_info": false, 00:38:47.595 "zone_management": false, 00:38:47.595 "zone_append": false, 00:38:47.595 "compare": false, 00:38:47.595 "compare_and_write": false, 00:38:47.595 "abort": false, 00:38:47.595 "seek_hole": false, 00:38:47.595 "seek_data": false, 00:38:47.595 "copy": false, 00:38:47.595 "nvme_iov_md": false 00:38:47.595 }, 00:38:47.595 "driver_specific": { 00:38:47.595 "ftl": { 00:38:47.595 "base_bdev": "09c83529-f0ea-494e-8e55-d73a0ed720c6", 00:38:47.595 "cache": "nvc0n1p0" 00:38:47.595 } 00:38:47.595 } 00:38:47.595 } 00:38:47.595 ] 00:38:47.595 07:05:20 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:38:47.595 07:05:20 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:38:47.595 07:05:20 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:38:47.854 07:05:20 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:38:47.854 07:05:20 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:38:48.112 07:05:20 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:38:48.112 { 00:38:48.112 "name": "ftl0", 00:38:48.112 "aliases": [ 00:38:48.112 "a47c3814-54f8-4eb2-8588-0c95ee6f413a" 00:38:48.112 ], 00:38:48.112 "product_name": "FTL disk", 00:38:48.112 "block_size": 4096, 00:38:48.112 "num_blocks": 23592960, 00:38:48.112 "uuid": "a47c3814-54f8-4eb2-8588-0c95ee6f413a", 00:38:48.112 "assigned_rate_limits": { 00:38:48.112 "rw_ios_per_sec": 0, 00:38:48.112 "rw_mbytes_per_sec": 0, 00:38:48.112 "r_mbytes_per_sec": 0, 00:38:48.112 "w_mbytes_per_sec": 0 00:38:48.112 }, 00:38:48.112 "claimed": false, 00:38:48.112 "zoned": false, 00:38:48.112 "supported_io_types": { 00:38:48.112 "read": true, 00:38:48.112 "write": true, 00:38:48.112 "unmap": true, 00:38:48.112 "flush": true, 00:38:48.112 "reset": false, 00:38:48.112 "nvme_admin": false, 00:38:48.112 "nvme_io": false, 00:38:48.112 "nvme_io_md": false, 00:38:48.112 "write_zeroes": true, 00:38:48.112 "zcopy": false, 00:38:48.112 "get_zone_info": false, 00:38:48.112 "zone_management": false, 00:38:48.112 "zone_append": false, 00:38:48.112 "compare": false, 00:38:48.112 "compare_and_write": false, 00:38:48.112 "abort": false, 00:38:48.112 "seek_hole": false, 00:38:48.112 "seek_data": false, 00:38:48.112 "copy": false, 00:38:48.112 "nvme_iov_md": false 00:38:48.112 }, 00:38:48.112 "driver_specific": { 00:38:48.112 "ftl": { 00:38:48.112 "base_bdev": "09c83529-f0ea-494e-8e55-d73a0ed720c6", 
00:38:48.112 "cache": "nvc0n1p0" 00:38:48.112 } 00:38:48.112 } 00:38:48.112 } 00:38:48.112 ]' 00:38:48.112 07:05:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:38:48.112 07:05:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:38:48.112 07:05:20 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:38:48.371 [2024-12-06 07:05:20.885272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.371 [2024-12-06 07:05:20.885349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:48.371 [2024-12-06 07:05:20.885372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:48.371 [2024-12-06 07:05:20.885389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.371 [2024-12-06 07:05:20.885433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:38:48.371 [2024-12-06 07:05:20.888787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.371 [2024-12-06 07:05:20.888848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:48.371 [2024-12-06 07:05:20.888871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.329 ms 00:38:48.371 [2024-12-06 07:05:20.888883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.371 [2024-12-06 07:05:20.889508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.371 [2024-12-06 07:05:20.889543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:48.371 [2024-12-06 07:05:20.889562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:38:48.371 [2024-12-06 07:05:20.889574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.371 [2024-12-06 07:05:20.893111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.371 [2024-12-06 07:05:20.893141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:48.371 [2024-12-06 07:05:20.893191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.496 ms 00:38:48.371 [2024-12-06 07:05:20.893202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.371 [2024-12-06 07:05:20.900034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.371 [2024-12-06 07:05:20.900067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:48.371 [2024-12-06 07:05:20.900102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.782 ms 00:38:48.371 [2024-12-06 07:05:20.900114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.371 [2024-12-06 07:05:20.926918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.371 [2024-12-06 07:05:20.926958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:48.371 [2024-12-06 07:05:20.926995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.657 ms 00:38:48.371 [2024-12-06 07:05:20.927006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.371 [2024-12-06 07:05:20.943989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.371 [2024-12-06 07:05:20.944187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:48.371 [2024-12-06 07:05:20.944264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.888 ms 00:38:48.371 [2024-12-06 07:05:20.944281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.371 [2024-12-06 07:05:20.944529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.371 [2024-12-06 07:05:20.944593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:48.371 [2024-12-06 07:05:20.944626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:38:48.371 [2024-12-06 07:05:20.944638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.630 [2024-12-06 07:05:20.972774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.630 [2024-12-06 07:05:20.972813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:48.630 [2024-12-06 07:05:20.972849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.089 ms 00:38:48.630 [2024-12-06 07:05:20.972860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.630 [2024-12-06 07:05:20.999296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.630 [2024-12-06 07:05:20.999333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:48.630 [2024-12-06 07:05:20.999370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.339 ms 00:38:48.630 [2024-12-06 07:05:20.999381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.630 [2024-12-06 07:05:21.025444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.630 [2024-12-06 07:05:21.025482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:48.630 [2024-12-06 07:05:21.025517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.971 ms 00:38:48.630 [2024-12-06 07:05:21.025528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.630 [2024-12-06 07:05:21.055164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.630 [2024-12-06 07:05:21.055219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:48.630 [2024-12-06 07:05:21.055254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.485 ms 00:38:48.630 [2024-12-06 07:05:21.055265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.630 [2024-12-06 07:05:21.055361] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:48.630 [2024-12-06 07:05:21.055384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055521] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:48.630 [2024-12-06 07:05:21.055798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 
[2024-12-06 07:05:21.055883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.055999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:38:48.631 [2024-12-06 07:05:21.056201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:48.631 [2024-12-06 07:05:21.056857] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:48.631 [2024-12-06 07:05:21.056875] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a47c3814-54f8-4eb2-8588-0c95ee6f413a 00:38:48.631 [2024-12-06 07:05:21.056887] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:48.631 [2024-12-06 07:05:21.056900] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:48.631 [2024-12-06 07:05:21.056911] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:48.631 [2024-12-06 07:05:21.056927] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:48.631 [2024-12-06 07:05:21.056937] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:48.631 [2024-12-06 07:05:21.056950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:38:48.631 [2024-12-06 07:05:21.056961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:48.631 [2024-12-06 07:05:21.056973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:48.631 [2024-12-06 07:05:21.056983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:48.631 [2024-12-06 07:05:21.056996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.631 [2024-12-06 07:05:21.057007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:48.631 [2024-12-06 07:05:21.057021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:38:48.631 [2024-12-06 07:05:21.057033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.631 [2024-12-06 07:05:21.071819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.631 [2024-12-06 07:05:21.071873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:48.631 [2024-12-06 07:05:21.071910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.745 ms 00:38:48.631 [2024-12-06 07:05:21.071922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.632 [2024-12-06 07:05:21.072437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.632 [2024-12-06 07:05:21.072502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:48.632 [2024-12-06 07:05:21.072535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:38:48.632 [2024-12-06 07:05:21.072547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.632 [2024-12-06 07:05:21.121180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.632 [2024-12-06 07:05:21.121239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:48.632 [2024-12-06 07:05:21.121274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.632 [2024-12-06 07:05:21.121286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.632 [2024-12-06 07:05:21.121413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.632 [2024-12-06 07:05:21.121432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:48.632 [2024-12-06 07:05:21.121445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.632 [2024-12-06 07:05:21.121456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.632 [2024-12-06 07:05:21.121590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.632 [2024-12-06 07:05:21.121610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:48.632 [2024-12-06 07:05:21.121631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.632 [2024-12-06 07:05:21.121643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.632 [2024-12-06 07:05:21.121684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.632 [2024-12-06 07:05:21.121732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:48.632 [2024-12-06 07:05:21.121751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.632 [2024-12-06 07:05:21.121763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.632 [2024-12-06 07:05:21.211405] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.632 [2024-12-06 07:05:21.211489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:48.632 [2024-12-06 07:05:21.211527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.632 [2024-12-06 07:05:21.211538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.890 [2024-12-06 07:05:21.283502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.890 [2024-12-06 07:05:21.283572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:48.890 [2024-12-06 07:05:21.283609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.890 [2024-12-06 07:05:21.283621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.890 [2024-12-06 07:05:21.283813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.890 [2024-12-06 07:05:21.283834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:48.890 [2024-12-06 07:05:21.283851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.890 [2024-12-06 07:05:21.283881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.890 [2024-12-06 07:05:21.283975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.890 [2024-12-06 07:05:21.283990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:48.890 [2024-12-06 07:05:21.284005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.890 [2024-12-06 07:05:21.284017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.890 [2024-12-06 07:05:21.284176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.890 [2024-12-06 07:05:21.284205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:48.890 [2024-12-06 07:05:21.284223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.890 [2024-12-06 07:05:21.284263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.890 [2024-12-06 07:05:21.284353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.890 [2024-12-06 07:05:21.284404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:48.890 [2024-12-06 07:05:21.284421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.890 [2024-12-06 07:05:21.284434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.891 [2024-12-06 07:05:21.284502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.891 [2024-12-06 07:05:21.284525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:48.891 [2024-12-06 07:05:21.284558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.891 [2024-12-06 07:05:21.284570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.891 [2024-12-06 07:05:21.284656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:48.891 [2024-12-06 07:05:21.284680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:48.891 [2024-12-06 07:05:21.284696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:48.891 [2024-12-06 07:05:21.284723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:38:48.891 [2024-12-06 07:05:21.284955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 399.664 ms, result 0 00:38:48.891 true 00:38:48.891 07:05:21 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 77913 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77913 ']' 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77913 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77913 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:48.891 killing process with pid 77913 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77913' 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77913 00:38:48.891 07:05:21 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77913 00:38:54.157 07:05:25 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:38:54.157 65536+0 records in 00:38:54.157 65536+0 records out 00:38:54.157 268435456 bytes (268 MB, 256 MiB) copied, 0.97187 s, 276 MB/s 00:38:54.157 07:05:26 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:54.415 [2024-12-06 07:05:26.807419] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
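The xtrace above shows the harness's killprocess guard tearing down pid 77913 before the trim test builds its input; the dd step then produces the 256 MiB random file (4 KiB x 65536 = 268435456 bytes) that spdk_dd pushes through ftl0. A sketch of a killprocess-style helper matching the checks visible in the trace; the real autotest_common.sh implementation may differ in details:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # the '[' -z 77913 ']' guard above
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the trace compares the name against "sudo"; it is reactor_0 here,
            # so the pid itself is killed rather than a wrapped child
            echo "killing process with pid $pid"
        fi
        kill "$pid"
        wait "$pid" 2>/dev/null                  # reap it so the test sees the exit
    }
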
00:38:54.415 [2024-12-06 07:05:26.807555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78106 ] 00:38:54.415 [2024-12-06 07:05:26.965694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.673 [2024-12-06 07:05:27.045502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.933 [2024-12-06 07:05:27.329618] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:54.933 [2024-12-06 07:05:27.329770] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:54.933 [2024-12-06 07:05:27.489589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.489637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:54.933 [2024-12-06 07:05:27.489670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:54.933 [2024-12-06 07:05:27.489681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.492748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.492812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:54.933 [2024-12-06 07:05:27.492843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:38:54.933 [2024-12-06 07:05:27.492853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.492978] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:54.933 [2024-12-06 07:05:27.493984] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:54.933 [2024-12-06 07:05:27.494022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.494051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:54.933 [2024-12-06 07:05:27.494063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:38:54.933 [2024-12-06 07:05:27.494073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.495412] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:54.933 [2024-12-06 07:05:27.508652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.508688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:54.933 [2024-12-06 07:05:27.508727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.241 ms 00:38:54.933 [2024-12-06 07:05:27.508740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.508846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.508865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:54.933 [2024-12-06 07:05:27.508876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:38:54.933 [2024-12-06 07:05:27.508885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.513091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:54.933 [2024-12-06 07:05:27.513126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:54.933 [2024-12-06 07:05:27.513155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.127 ms 00:38:54.933 [2024-12-06 07:05:27.513164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.513268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.513286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:54.933 [2024-12-06 07:05:27.513298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:38:54.933 [2024-12-06 07:05:27.513307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.513344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.513358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:54.933 [2024-12-06 07:05:27.513384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:38:54.933 [2024-12-06 07:05:27.513420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.513453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:38:54.933 [2024-12-06 07:05:27.516984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.517017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:54.933 [2024-12-06 07:05:27.517047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.539 ms 00:38:54.933 [2024-12-06 07:05:27.517057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.517103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.517118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:54.933 [2024-12-06 07:05:27.517129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:38:54.933 [2024-12-06 07:05:27.517138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.517180] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:54.933 [2024-12-06 07:05:27.517254] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:54.933 [2024-12-06 07:05:27.517299] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:54.933 [2024-12-06 07:05:27.517320] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:38:54.933 [2024-12-06 07:05:27.517438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:54.933 [2024-12-06 07:05:27.517454] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:54.933 [2024-12-06 07:05:27.517467] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:54.933 [2024-12-06 07:05:27.517485] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:54.933 [2024-12-06 07:05:27.517498] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:54.933 [2024-12-06 07:05:27.517509] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:38:54.933 [2024-12-06 07:05:27.517519] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:54.933 [2024-12-06 07:05:27.517529] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:54.933 [2024-12-06 07:05:27.517539] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:54.933 [2024-12-06 07:05:27.517550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.517564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:54.933 [2024-12-06 07:05:27.517575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:38:54.933 [2024-12-06 07:05:27.517586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.517716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.933 [2024-12-06 07:05:27.517739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:54.933 [2024-12-06 07:05:27.517752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:38:54.933 [2024-12-06 07:05:27.517789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:54.933 [2024-12-06 07:05:27.517935] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:54.933 [2024-12-06 07:05:27.517964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:54.933 [2024-12-06 07:05:27.517982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:54.933 [2024-12-06 07:05:27.517997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.933 [2024-12-06 07:05:27.518015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:54.933 [2024-12-06 07:05:27.518044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:54.933 [2024-12-06 07:05:27.518061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:38:54.933 [2024-12-06 07:05:27.518077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:54.933 [2024-12-06 07:05:27.518089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:38:54.933 [2024-12-06 07:05:27.518099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:54.933 [2024-12-06 07:05:27.518108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:54.933 [2024-12-06 07:05:27.518132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:38:54.933 [2024-12-06 07:05:27.518141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:54.933 [2024-12-06 07:05:27.518150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:54.933 [2024-12-06 07:05:27.518175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:38:54.933 [2024-12-06 07:05:27.518186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.933 [2024-12-06 07:05:27.518196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:54.933 [2024-12-06 07:05:27.518206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:38:54.933 [2024-12-06 07:05:27.518215] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.933 [2024-12-06 07:05:27.518224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:54.933 [2024-12-06 07:05:27.518234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:38:54.933 [2024-12-06 07:05:27.518243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:54.933 [2024-12-06 07:05:27.518252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:54.933 [2024-12-06 07:05:27.518261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:38:54.934 [2024-12-06 07:05:27.518270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:54.934 [2024-12-06 07:05:27.518279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:54.934 [2024-12-06 07:05:27.518289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:38:54.934 [2024-12-06 07:05:27.518298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:54.934 [2024-12-06 07:05:27.518307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:54.934 [2024-12-06 07:05:27.518316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:38:54.934 [2024-12-06 07:05:27.518325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:54.934 [2024-12-06 07:05:27.518334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:54.934 [2024-12-06 07:05:27.518343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:38:54.934 [2024-12-06 07:05:27.518352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:54.934 [2024-12-06 07:05:27.518361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:54.934 [2024-12-06 07:05:27.518371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:38:54.934 [2024-12-06 07:05:27.518380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:54.934 [2024-12-06 07:05:27.518389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:54.934 [2024-12-06 07:05:27.518398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:38:54.934 [2024-12-06 07:05:27.518407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.934 [2024-12-06 07:05:27.518416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:54.934 [2024-12-06 07:05:27.518425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:38:54.934 [2024-12-06 07:05:27.518434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.934 [2024-12-06 07:05:27.518443] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:54.934 [2024-12-06 07:05:27.518453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:54.934 [2024-12-06 07:05:27.518468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:54.934 [2024-12-06 07:05:27.518478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:54.934 [2024-12-06 07:05:27.518488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:54.934 [2024-12-06 07:05:27.518499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:54.934 [2024-12-06 07:05:27.518508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:54.934 
[2024-12-06 07:05:27.518518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:54.934 [2024-12-06 07:05:27.518527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:54.934 [2024-12-06 07:05:27.518536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:54.934 [2024-12-06 07:05:27.518547] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:54.934 [2024-12-06 07:05:27.518561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:54.934 [2024-12-06 07:05:27.518572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:38:54.934 [2024-12-06 07:05:27.518582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:38:54.934 [2024-12-06 07:05:27.518593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:38:54.934 [2024-12-06 07:05:27.518603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:38:54.934 [2024-12-06 07:05:27.518612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:38:54.934 [2024-12-06 07:05:27.518622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:38:54.934 [2024-12-06 07:05:27.518632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:38:54.934 [2024-12-06 07:05:27.518656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:38:54.934 [2024-12-06 07:05:27.518666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:38:54.934 [2024-12-06 07:05:27.518676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:38:54.934 [2024-12-06 07:05:27.518686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:38:54.934 [2024-12-06 07:05:27.518696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:38:54.934 [2024-12-06 07:05:27.518706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:38:54.934 [2024-12-06 07:05:27.518716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:38:54.934 [2024-12-06 07:05:27.518725] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:54.934 [2024-12-06 07:05:27.518736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:54.934 [2024-12-06 07:05:27.518761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:38:54.934 [2024-12-06 07:05:27.518775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:54.934 [2024-12-06 07:05:27.518786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:54.934 [2024-12-06 07:05:27.518796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:54.934 [2024-12-06 07:05:27.518808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:54.934 [2024-12-06 07:05:27.518823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:54.934 [2024-12-06 07:05:27.518834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:38:54.934 [2024-12-06 07:05:27.518844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.546825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.546895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:55.193 [2024-12-06 07:05:27.546929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.856 ms 00:38:55.193 [2024-12-06 07:05:27.546939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.547124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.547174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:55.193 [2024-12-06 07:05:27.547187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:38:55.193 [2024-12-06 07:05:27.547196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.589455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.589502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:55.193 [2024-12-06 07:05:27.589538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.229 ms 00:38:55.193 [2024-12-06 07:05:27.589549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.589677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.589695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:55.193 [2024-12-06 07:05:27.589707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:55.193 [2024-12-06 07:05:27.589716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.590078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.590104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:55.193 [2024-12-06 07:05:27.590124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:38:55.193 [2024-12-06 07:05:27.590135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.590275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.590293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:55.193 [2024-12-06 07:05:27.590304] 
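The MiB figures in the layout dump above and the blk_offs/blk_sz hex values in the SB metadata layout describe the same regions. Assuming the FTL block size is 4 KiB, the two views can be cross-checked, e.g. region type 0x2 (the l2p region) at blk_offs:0x20 blk_sz:0x5a00 against the "offset: 0.12 MiB / blocks: 90.00 MiB" lines printed for Region l2p:

    # Hypothetical cross-check, assuming 4 KiB FTL blocks.
    blk_to_mib() { echo "scale=2; $1 * 4096 / 1048576" | bc; }
    blk_to_mib $((0x20))      # -> .12   (offset of Region l2p)
    blk_to_mib $((0x5a00))    # -> 90.00 (size of Region l2p)
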
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:38:55.193 [2024-12-06 07:05:27.590314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.604365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.604419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:55.193 [2024-12-06 07:05:27.604451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.025 ms 00:38:55.193 [2024-12-06 07:05:27.604462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.617633] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:38:55.193 [2024-12-06 07:05:27.617687] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:55.193 [2024-12-06 07:05:27.617736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.617750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:55.193 [2024-12-06 07:05:27.617761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.119 ms 00:38:55.193 [2024-12-06 07:05:27.617771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.641013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.641049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:55.193 [2024-12-06 07:05:27.641080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.155 ms 00:38:55.193 [2024-12-06 07:05:27.641090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.653560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.653596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:55.193 [2024-12-06 07:05:27.653626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.387 ms 00:38:55.193 [2024-12-06 07:05:27.653635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.193 [2024-12-06 07:05:27.665873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.193 [2024-12-06 07:05:27.665907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:55.194 [2024-12-06 07:05:27.665936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.116 ms 00:38:55.194 [2024-12-06 07:05:27.665945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.666667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.666742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:55.194 [2024-12-06 07:05:27.666773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:38:55.194 [2024-12-06 07:05:27.666783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.724605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.724747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:55.194 [2024-12-06 07:05:27.724768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.791 ms 00:38:55.194 [2024-12-06 07:05:27.724778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.734753] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:55.194 [2024-12-06 07:05:27.746186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.746237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:55.194 [2024-12-06 07:05:27.746270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.258 ms 00:38:55.194 [2024-12-06 07:05:27.746281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.746422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.746440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:55.194 [2024-12-06 07:05:27.746453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:38:55.194 [2024-12-06 07:05:27.746462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.746552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.746567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:55.194 [2024-12-06 07:05:27.746578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:38:55.194 [2024-12-06 07:05:27.746588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.746644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.746667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:55.194 [2024-12-06 07:05:27.746678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:38:55.194 [2024-12-06 07:05:27.746688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.746796] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:55.194 [2024-12-06 07:05:27.746821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.746832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:55.194 [2024-12-06 07:05:27.746843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:38:55.194 [2024-12-06 07:05:27.746853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.771549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.771588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:55.194 [2024-12-06 07:05:27.771619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.671 ms 00:38:55.194 [2024-12-06 07:05:27.771630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.194 [2024-12-06 07:05:27.771740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.194 [2024-12-06 07:05:27.771759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:55.194 [2024-12-06 07:05:27.771770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:38:55.194 [2024-12-06 07:05:27.771779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
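Right after this, the management trace closes with "Management process finished, name 'FTL startup', duration = 283.043 ms, result 0". A hypothetical post-processing one-liner (ftl_startup.log is an assumed capture of the trace above, one entry per line) that sums the per-step durations to roughly cross-check that summary; the steps run sequentially, so the sum should land close to the reported total:

    grep -o 'duration: [0-9.]* ms' ftl_startup.log |
        awk '{ total += $2 } END { printf "total: %.3f ms\n", total }'
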
00:38:55.194 [2024-12-06 07:05:27.773060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:55.194 [2024-12-06 07:05:27.776432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 283.043 ms, result 0 00:38:55.194 [2024-12-06 07:05:27.777238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:55.452 [2024-12-06 07:05:27.792379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:56.388  [2024-12-06T07:05:29.914Z] Copying: 21/256 [MB] (21 MBps) [2024-12-06T07:05:30.849Z] Copying: 44/256 [MB] (22 MBps) [2024-12-06T07:05:32.228Z] Copying: 66/256 [MB] (22 MBps) [2024-12-06T07:05:33.164Z] Copying: 89/256 [MB] (22 MBps) [2024-12-06T07:05:34.100Z] Copying: 111/256 [MB] (22 MBps) [2024-12-06T07:05:35.036Z] Copying: 133/256 [MB] (22 MBps) [2024-12-06T07:05:35.984Z] Copying: 155/256 [MB] (22 MBps) [2024-12-06T07:05:36.919Z] Copying: 178/256 [MB] (22 MBps) [2024-12-06T07:05:37.883Z] Copying: 200/256 [MB] (22 MBps) [2024-12-06T07:05:38.819Z] Copying: 222/256 [MB] (22 MBps) [2024-12-06T07:05:39.386Z] Copying: 244/256 [MB] (22 MBps) [2024-12-06T07:05:39.386Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-06 07:05:39.300750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:06.795 [2024-12-06 07:05:39.310520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.795 [2024-12-06 07:05:39.310557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:06.795 [2024-12-06 07:05:39.310590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:06.795 [2024-12-06 07:05:39.310607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.795 [2024-12-06 07:05:39.310634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:06.795 [2024-12-06 07:05:39.313334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.795 [2024-12-06 07:05:39.313362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:06.795 [2024-12-06 07:05:39.313391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:39:06.795 [2024-12-06 07:05:39.313400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.795 [2024-12-06 07:05:39.315136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.795 [2024-12-06 07:05:39.315171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:06.795 [2024-12-06 07:05:39.315200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:39:06.795 [2024-12-06 07:05:39.315210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.795 [2024-12-06 07:05:39.321923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.795 [2024-12-06 07:05:39.321978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:06.795 [2024-12-06 07:05:39.322008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:39:06.795 [2024-12-06 07:05:39.322018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.795 [2024-12-06 07:05:39.327836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.795 
[2024-12-06 07:05:39.327867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:06.795 [2024-12-06 07:05:39.327895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.764 ms 00:39:06.795 [2024-12-06 07:05:39.327904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.795 [2024-12-06 07:05:39.352333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.795 [2024-12-06 07:05:39.352372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:06.795 [2024-12-06 07:05:39.352387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.366 ms 00:39:06.795 [2024-12-06 07:05:39.352397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.795 [2024-12-06 07:05:39.366923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.795 [2024-12-06 07:05:39.366965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:06.795 [2024-12-06 07:05:39.367000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.483 ms 00:39:06.795 [2024-12-06 07:05:39.367011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.795 [2024-12-06 07:05:39.367146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.795 [2024-12-06 07:05:39.367165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:06.795 [2024-12-06 07:05:39.367176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:39:06.795 [2024-12-06 07:05:39.367210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.056 [2024-12-06 07:05:39.393213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.056 [2024-12-06 07:05:39.393248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:07.056 [2024-12-06 07:05:39.393277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.950 ms 00:39:07.056 [2024-12-06 07:05:39.393287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.056 [2024-12-06 07:05:39.417764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.056 [2024-12-06 07:05:39.417798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:07.056 [2024-12-06 07:05:39.417828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.421 ms 00:39:07.056 [2024-12-06 07:05:39.417837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.056 [2024-12-06 07:05:39.441582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.056 [2024-12-06 07:05:39.441616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:07.056 [2024-12-06 07:05:39.441645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.704 ms 00:39:07.056 [2024-12-06 07:05:39.441655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.056 [2024-12-06 07:05:39.465753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.056 [2024-12-06 07:05:39.465787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:07.056 [2024-12-06 07:05:39.465818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.015 ms 00:39:07.056 [2024-12-06 07:05:39.465827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.056 [2024-12-06 07:05:39.465869] 
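The clean-state dump that follows lists all 100 bands as "0 / 261120 wr_cnt: 0 state: free", i.e. nothing valid remains after the trim test. A hypothetical tally over a saved copy of the log (ftl_shutdown.log is an assumed capture) that counts bands per state:

    grep -oE 'state: [a-z]+' ftl_shutdown.log | sort | uniq -c
    # -> "100 state: free" for the dump below
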
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:07.056 [2024-12-06 07:05:39.465889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.465997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466175] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:07.056 [2024-12-06 07:05:39.466340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 
07:05:39.466437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:39:07.057 [2024-12-06 07:05:39.466693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:07.057 [2024-12-06 07:05:39.466997] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:07.057 [2024-12-06 07:05:39.467007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a47c3814-54f8-4eb2-8588-0c95ee6f413a 00:39:07.057 [2024-12-06 07:05:39.467018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:07.057 [2024-12-06 07:05:39.467027] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:07.057 [2024-12-06 07:05:39.467037] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:07.057 [2024-12-06 07:05:39.467047] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:07.057 [2024-12-06 07:05:39.467056] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:07.057 [2024-12-06 07:05:39.467067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:07.057 [2024-12-06 07:05:39.467077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:07.057 [2024-12-06 07:05:39.467087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:07.057 [2024-12-06 07:05:39.467096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:07.057 [2024-12-06 07:05:39.467106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.057 [2024-12-06 07:05:39.467121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:07.057 [2024-12-06 07:05:39.467132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:39:07.057 [2024-12-06 07:05:39.467142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.057 [2024-12-06 07:05:39.480277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.057 [2024-12-06 07:05:39.480310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:07.057 [2024-12-06 07:05:39.480339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.096 ms 00:39:07.057 [2024-12-06 07:05:39.480349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.057 [2024-12-06 07:05:39.480812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.057 [2024-12-06 07:05:39.480842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:07.057 [2024-12-06 07:05:39.480855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:39:07.057 [2024-12-06 07:05:39.480865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.057 [2024-12-06 07:05:39.516406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.057 [2024-12-06 07:05:39.516461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:07.057 [2024-12-06 07:05:39.516476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.057 [2024-12-06 07:05:39.516486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.057 [2024-12-06 07:05:39.516594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.057 [2024-12-06 07:05:39.516610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:07.057 [2024-12-06 07:05:39.516621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:39:07.057 [2024-12-06 07:05:39.516630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.057 [2024-12-06 07:05:39.516718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.057 [2024-12-06 07:05:39.516770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:07.057 [2024-12-06 07:05:39.516784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.057 [2024-12-06 07:05:39.516795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.057 [2024-12-06 07:05:39.516819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.057 [2024-12-06 07:05:39.516846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:07.057 [2024-12-06 07:05:39.516857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.058 [2024-12-06 07:05:39.516866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.058 [2024-12-06 07:05:39.595272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.058 [2024-12-06 07:05:39.595329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:07.058 [2024-12-06 07:05:39.595361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.058 [2024-12-06 07:05:39.595371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.317 [2024-12-06 07:05:39.661605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.317 [2024-12-06 07:05:39.661653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:07.317 [2024-12-06 07:05:39.661685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.317 [2024-12-06 07:05:39.661695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.317 [2024-12-06 07:05:39.661773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.317 [2024-12-06 07:05:39.661791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:07.317 [2024-12-06 07:05:39.661801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.317 [2024-12-06 07:05:39.661811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.317 [2024-12-06 07:05:39.661841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.317 [2024-12-06 07:05:39.661852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:07.317 [2024-12-06 07:05:39.661869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.317 [2024-12-06 07:05:39.661878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.317 [2024-12-06 07:05:39.662033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.317 [2024-12-06 07:05:39.662051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:07.317 [2024-12-06 07:05:39.662062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.317 [2024-12-06 07:05:39.662072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.317 [2024-12-06 07:05:39.662120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.317 [2024-12-06 07:05:39.662137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:07.317 
[2024-12-06 07:05:39.662148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.317 [2024-12-06 07:05:39.662165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.317 [2024-12-06 07:05:39.662209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.317 [2024-12-06 07:05:39.662230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:07.317 [2024-12-06 07:05:39.662241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.317 [2024-12-06 07:05:39.662252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.317 [2024-12-06 07:05:39.662303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.317 [2024-12-06 07:05:39.662321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:07.317 [2024-12-06 07:05:39.662345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.317 [2024-12-06 07:05:39.662355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.317 [2024-12-06 07:05:39.662508] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.975 ms, result 0 00:39:08.256 00:39:08.256 00:39:08.256 07:05:40 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78250 00:39:08.256 07:05:40 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:39:08.256 07:05:40 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78250 00:39:08.256 07:05:40 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78250 ']' 00:39:08.256 07:05:40 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.256 07:05:40 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.256 07:05:40 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.256 07:05:40 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.256 07:05:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:39:08.256 [2024-12-06 07:05:40.642308] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
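In the statistics dump above, "WAF: inf" is the write-amplification factor, the ratio of media writes to user writes: 960 internal metadata writes against 0 user writes gives an infinite ratio, which is expected for a device that has so far written only bookkeeping.

The shell trace that follows the dump shows ftl/trim.sh@71 starting a fresh spdk_tgt (pid 78250) with FTL init logging, then waiting for the RPC server on /var/tmp/spdk.sock before replaying the saved bdev configuration and issuing two trims. A minimal sketch of that launch-load-trim sequence, assuming the repository layout from the log; CONFIG_JSON is a hypothetical saved-config file, and the polling loop is a stand-in for the harness's waitforlisten helper, not its exact source:

  # Launch the target with FTL init logging, as trim.sh@71 does.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!
  # Wait for the RPC socket: poll a cheap RPC until it answers.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || exit 1   # target died before listening
      sleep 0.1
  done
  # Replay the bdev/FTL configuration saved by the earlier run (path hypothetical).
  "$SPDK_DIR/scripts/rpc.py" load_config < "$CONFIG_JSON"
  # Trim 1024 blocks at each end of the 23592960-entry L2P range (trim.sh@78/@79;
  # 23591936 = 23592960 - 1024, i.e. the final 1024 blocks).
  "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each unmap surfaces below as an 'FTL trim' management process finishing with result 0.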
00:39:08.256 [2024-12-06 07:05:40.642485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78250 ] 00:39:08.256 [2024-12-06 07:05:40.818667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.514 [2024-12-06 07:05:40.898342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.080 07:05:41 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:09.080 07:05:41 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:39:09.080 07:05:41 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:39:09.338 [2024-12-06 07:05:41.776233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:09.338 [2024-12-06 07:05:41.776333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:09.598 [2024-12-06 07:05:41.952680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.952733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:09.598 [2024-12-06 07:05:41.952770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:09.598 [2024-12-06 07:05:41.952781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.955869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.955906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:09.598 [2024-12-06 07:05:41.955937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:39:09.598 [2024-12-06 07:05:41.955947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.956062] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:09.598 [2024-12-06 07:05:41.956970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:09.598 [2024-12-06 07:05:41.957024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.957036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:09.598 [2024-12-06 07:05:41.957049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:39:09.598 [2024-12-06 07:05:41.957059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.958131] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:09.598 [2024-12-06 07:05:41.971298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.971371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:09.598 [2024-12-06 07:05:41.971389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.172 ms 00:39:09.598 [2024-12-06 07:05:41.971405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.971519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.971545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:09.598 [2024-12-06 07:05:41.971558] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:39:09.598 [2024-12-06 07:05:41.971572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.975697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.975774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:09.598 [2024-12-06 07:05:41.975790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.030 ms 00:39:09.598 [2024-12-06 07:05:41.975805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.975952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.975979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:09.598 [2024-12-06 07:05:41.976008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:39:09.598 [2024-12-06 07:05:41.976046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.976083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.976103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:09.598 [2024-12-06 07:05:41.976116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:09.598 [2024-12-06 07:05:41.976132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.976164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:09.598 [2024-12-06 07:05:41.979758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.979787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:09.598 [2024-12-06 07:05:41.979822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:39:09.598 [2024-12-06 07:05:41.979834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.979906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.979923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:09.598 [2024-12-06 07:05:41.979940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:09.598 [2024-12-06 07:05:41.979955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.980002] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:09.598 [2024-12-06 07:05:41.980033] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:09.598 [2024-12-06 07:05:41.980088] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:09.598 [2024-12-06 07:05:41.980110] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:09.598 [2024-12-06 07:05:41.980216] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:09.598 [2024-12-06 07:05:41.980232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:09.598 [2024-12-06 07:05:41.980283] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:09.598 [2024-12-06 07:05:41.980298] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:09.598 [2024-12-06 07:05:41.980315] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:09.598 [2024-12-06 07:05:41.980328] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:09.598 [2024-12-06 07:05:41.980343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:09.598 [2024-12-06 07:05:41.980354] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:09.598 [2024-12-06 07:05:41.980373] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:09.598 [2024-12-06 07:05:41.980386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.980401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:09.598 [2024-12-06 07:05:41.980414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:39:09.598 [2024-12-06 07:05:41.980429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.598 [2024-12-06 07:05:41.980523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.598 [2024-12-06 07:05:41.980543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:09.599 [2024-12-06 07:05:41.980555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:39:09.599 [2024-12-06 07:05:41.980570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.599 [2024-12-06 07:05:41.980699] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:09.599 [2024-12-06 07:05:41.980733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:09.599 [2024-12-06 07:05:41.980764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:09.599 [2024-12-06 07:05:41.980783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:09.599 [2024-12-06 07:05:41.980796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:09.599 [2024-12-06 07:05:41.980812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:09.599 [2024-12-06 07:05:41.980823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:09.599 [2024-12-06 07:05:41.980842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:09.599 [2024-12-06 07:05:41.980854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:09.599 [2024-12-06 07:05:41.980869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:09.599 [2024-12-06 07:05:41.980880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:09.599 [2024-12-06 07:05:41.980894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:09.599 [2024-12-06 07:05:41.980905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:09.599 [2024-12-06 07:05:41.980920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:09.599 [2024-12-06 07:05:41.980931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:09.599 [2024-12-06 07:05:41.980946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:09.599 
[2024-12-06 07:05:41.980957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:09.599 [2024-12-06 07:05:41.980971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:09.599 [2024-12-06 07:05:41.980994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:09.599 [2024-12-06 07:05:41.981021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:09.599 [2024-12-06 07:05:41.981046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:09.599 [2024-12-06 07:05:41.981065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:09.599 [2024-12-06 07:05:41.981091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:09.599 [2024-12-06 07:05:41.981102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:09.599 [2024-12-06 07:05:41.981143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:09.599 [2024-12-06 07:05:41.981158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:09.599 [2024-12-06 07:05:41.981182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:09.599 [2024-12-06 07:05:41.981193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:09.599 [2024-12-06 07:05:41.981217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:09.599 [2024-12-06 07:05:41.981231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:09.599 [2024-12-06 07:05:41.981241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:09.599 [2024-12-06 07:05:41.981255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:09.599 [2024-12-06 07:05:41.981266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:09.599 [2024-12-06 07:05:41.981283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:09.599 [2024-12-06 07:05:41.981309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:09.599 [2024-12-06 07:05:41.981319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981333] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:09.599 [2024-12-06 07:05:41.981349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:09.599 [2024-12-06 07:05:41.981365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:09.599 [2024-12-06 07:05:41.981376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:09.599 [2024-12-06 07:05:41.981391] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:39:09.599 [2024-12-06 07:05:41.981402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:09.599 [2024-12-06 07:05:41.981416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:09.599 [2024-12-06 07:05:41.981427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:09.599 [2024-12-06 07:05:41.981441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:09.599 [2024-12-06 07:05:41.981452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:09.599 [2024-12-06 07:05:41.981468] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:09.599 [2024-12-06 07:05:41.981483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:09.599 [2024-12-06 07:05:41.981502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:09.599 [2024-12-06 07:05:41.981514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:09.599 [2024-12-06 07:05:41.981529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:09.599 [2024-12-06 07:05:41.981540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:09.599 [2024-12-06 07:05:41.981554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:09.599 [2024-12-06 07:05:41.981565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:09.599 [2024-12-06 07:05:41.981580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:09.599 [2024-12-06 07:05:41.981591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:09.599 [2024-12-06 07:05:41.981605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:09.599 [2024-12-06 07:05:41.981616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:09.599 [2024-12-06 07:05:41.981631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:09.599 [2024-12-06 07:05:41.981642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:09.599 [2024-12-06 07:05:41.981656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:09.599 [2024-12-06 07:05:41.981668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:09.599 [2024-12-06 07:05:41.981684] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:09.599 [2024-12-06 
07:05:41.981696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:09.599 [2024-12-06 07:05:41.981726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:09.599 [2024-12-06 07:05:41.981739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:09.599 [2024-12-06 07:05:41.981753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:09.599 [2024-12-06 07:05:41.981765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:09.599 [2024-12-06 07:05:41.981781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.599 [2024-12-06 07:05:41.981793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:09.599 [2024-12-06 07:05:41.981808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:39:09.599 [2024-12-06 07:05:41.981826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.599 [2024-12-06 07:05:42.010420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.599 [2024-12-06 07:05:42.010468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:09.599 [2024-12-06 07:05:42.010507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.498 ms 00:39:09.599 [2024-12-06 07:05:42.010524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.599 [2024-12-06 07:05:42.010685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.599 [2024-12-06 07:05:42.010716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:09.599 [2024-12-06 07:05:42.010752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:39:09.599 [2024-12-06 07:05:42.010764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.599 [2024-12-06 07:05:42.043968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.599 [2024-12-06 07:05:42.044015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:09.599 [2024-12-06 07:05:42.044052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.156 ms 00:39:09.599 [2024-12-06 07:05:42.044064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.599 [2024-12-06 07:05:42.044176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.599 [2024-12-06 07:05:42.044209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:09.599 [2024-12-06 07:05:42.044267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:09.599 [2024-12-06 07:05:42.044296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.599 [2024-12-06 07:05:42.044633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.599 [2024-12-06 07:05:42.044665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:09.599 [2024-12-06 07:05:42.044684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:39:09.600 [2024-12-06 07:05:42.044710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:09.600 [2024-12-06 07:05:42.044898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.600 [2024-12-06 07:05:42.044920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:09.600 [2024-12-06 07:05:42.044939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:39:09.600 [2024-12-06 07:05:42.044950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.600 [2024-12-06 07:05:42.060940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.600 [2024-12-06 07:05:42.060975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:09.600 [2024-12-06 07:05:42.061011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.957 ms 00:39:09.600 [2024-12-06 07:05:42.061023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.600 [2024-12-06 07:05:42.092876] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:39:09.600 [2024-12-06 07:05:42.092913] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:09.600 [2024-12-06 07:05:42.092948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.600 [2024-12-06 07:05:42.092959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:09.600 [2024-12-06 07:05:42.092977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.794 ms 00:39:09.600 [2024-12-06 07:05:42.093001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.600 [2024-12-06 07:05:42.116171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.600 [2024-12-06 07:05:42.116206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:09.600 [2024-12-06 07:05:42.116248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.076 ms 00:39:09.600 [2024-12-06 07:05:42.116261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.600 [2024-12-06 07:05:42.128647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.600 [2024-12-06 07:05:42.128681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:09.600 [2024-12-06 07:05:42.128729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.290 ms 00:39:09.600 [2024-12-06 07:05:42.128742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.600 [2024-12-06 07:05:42.140927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.600 [2024-12-06 07:05:42.140961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:09.600 [2024-12-06 07:05:42.140997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.096 ms 00:39:09.600 [2024-12-06 07:05:42.141008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.600 [2024-12-06 07:05:42.141840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.600 [2024-12-06 07:05:42.141874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:09.600 [2024-12-06 07:05:42.141895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:39:09.600 [2024-12-06 07:05:42.141906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.859 [2024-12-06 
07:05:42.204351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.860 [2024-12-06 07:05:42.204431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:09.860 [2024-12-06 07:05:42.204472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.406 ms 00:39:09.860 [2024-12-06 07:05:42.204484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.860 [2024-12-06 07:05:42.214609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:09.860 [2024-12-06 07:05:42.226764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.860 [2024-12-06 07:05:42.226859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:09.860 [2024-12-06 07:05:42.226883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.125 ms 00:39:09.860 [2024-12-06 07:05:42.226899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.860 [2024-12-06 07:05:42.227026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.860 [2024-12-06 07:05:42.227052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:09.860 [2024-12-06 07:05:42.227066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:39:09.860 [2024-12-06 07:05:42.227081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.860 [2024-12-06 07:05:42.227188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.860 [2024-12-06 07:05:42.227211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:09.860 [2024-12-06 07:05:42.227224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:39:09.860 [2024-12-06 07:05:42.227246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.860 [2024-12-06 07:05:42.227277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.860 [2024-12-06 07:05:42.227293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:09.860 [2024-12-06 07:05:42.227304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:09.860 [2024-12-06 07:05:42.227319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.860 [2024-12-06 07:05:42.227357] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:09.860 [2024-12-06 07:05:42.227376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.860 [2024-12-06 07:05:42.227389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:09.860 [2024-12-06 07:05:42.227402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:39:09.860 [2024-12-06 07:05:42.227413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.860 [2024-12-06 07:05:42.256752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.860 [2024-12-06 07:05:42.256814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:09.860 [2024-12-06 07:05:42.256850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.306 ms 00:39:09.860 [2024-12-06 07:05:42.256862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.860 [2024-12-06 07:05:42.257007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.860 [2024-12-06 07:05:42.257044] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:09.860 [2024-12-06 07:05:42.257089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:39:09.860 [2024-12-06 07:05:42.257118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.860 [2024-12-06 07:05:42.258398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:09.860 [2024-12-06 07:05:42.262238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.300 ms, result 0 00:39:09.860 [2024-12-06 07:05:42.263574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:09.860 Some configs were skipped because the RPC state that can call them passed over. 00:39:09.860 07:05:42 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:39:10.130 [2024-12-06 07:05:42.515842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.130 [2024-12-06 07:05:42.515919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:39:10.130 [2024-12-06 07:05:42.515939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:39:10.130 [2024-12-06 07:05:42.515970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.130 [2024-12-06 07:05:42.516035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.749 ms, result 0 00:39:10.130 true 00:39:10.130 07:05:42 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:39:10.438 [2024-12-06 07:05:42.728168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.438 [2024-12-06 07:05:42.728219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:39:10.438 [2024-12-06 07:05:42.728256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:39:10.438 [2024-12-06 07:05:42.728286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.438 [2024-12-06 07:05:42.728381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.755 ms, result 0 00:39:10.438 true 00:39:10.438 07:05:42 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78250 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78250 ']' 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78250 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78250 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:10.438 killing process with pid 78250 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78250' 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78250 00:39:10.438 07:05:42 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78250 00:39:11.021 [2024-12-06 07:05:43.525275] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.525366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:11.021 [2024-12-06 07:05:43.525400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:11.021 [2024-12-06 07:05:43.525412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.525442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:11.021 [2024-12-06 07:05:43.528575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.528645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:11.021 [2024-12-06 07:05:43.528690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:39:11.021 [2024-12-06 07:05:43.528701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.529004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.529031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:11.021 [2024-12-06 07:05:43.529048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:39:11.021 [2024-12-06 07:05:43.529060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.532850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.532906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:11.021 [2024-12-06 07:05:43.532943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:39:11.021 [2024-12-06 07:05:43.532969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.539455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.539498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:11.021 [2024-12-06 07:05:43.539530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.440 ms 00:39:11.021 [2024-12-06 07:05:43.539540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.550415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.550471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:11.021 [2024-12-06 07:05:43.550505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.804 ms 00:39:11.021 [2024-12-06 07:05:43.550516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.558521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.558572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:11.021 [2024-12-06 07:05:43.558604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.943 ms 00:39:11.021 [2024-12-06 07:05:43.558614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.558766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.558786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:11.021 [2024-12-06 07:05:43.558800] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:39:11.021 [2024-12-06 07:05:43.558810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.569918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.569966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:11.021 [2024-12-06 07:05:43.570002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.062 ms 00:39:11.021 [2024-12-06 07:05:43.570013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.580652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.580700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:11.021 [2024-12-06 07:05:43.580752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.591 ms 00:39:11.021 [2024-12-06 07:05:43.580765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.590963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.591011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:11.021 [2024-12-06 07:05:43.591046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.150 ms 00:39:11.021 [2024-12-06 07:05:43.591057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.601687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.021 [2024-12-06 07:05:43.601742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:11.021 [2024-12-06 07:05:43.601778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.553 ms 00:39:11.021 [2024-12-06 07:05:43.601789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.021 [2024-12-06 07:05:43.601835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:11.021 [2024-12-06 07:05:43.601855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.601874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.601886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.601900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.601911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.601929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.601941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.601971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.601982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 
07:05:43.602024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:11.021 [2024-12-06 07:05:43.602300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:39:11.022 [2024-12-06 07:05:43.602374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.602999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:11.022 [2024-12-06 07:05:43.603350] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:11.022 [2024-12-06 07:05:43.603378] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a47c3814-54f8-4eb2-8588-0c95ee6f413a 00:39:11.022 [2024-12-06 07:05:43.603411] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:11.022 [2024-12-06 07:05:43.603426] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:11.022 [2024-12-06 07:05:43.603437] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:11.022 [2024-12-06 07:05:43.603453] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:11.022 [2024-12-06 07:05:43.603479] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:11.022 [2024-12-06 07:05:43.603496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:11.022 [2024-12-06 07:05:43.603508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:11.022 [2024-12-06 07:05:43.603523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:11.022 [2024-12-06 07:05:43.603534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:11.022 [2024-12-06 07:05:43.603549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:39:11.022 [2024-12-06 07:05:43.603562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:11.022 [2024-12-06 07:05:43.603579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:39:11.022 [2024-12-06 07:05:43.603591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.618313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.297 [2024-12-06 07:05:43.618348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:11.297 [2024-12-06 07:05:43.618389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.648 ms 00:39:11.297 [2024-12-06 07:05:43.618400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.618838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.297 [2024-12-06 07:05:43.618873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:11.297 [2024-12-06 07:05:43.618898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:39:11.297 [2024-12-06 07:05:43.618910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.667328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.667522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:11.297 [2024-12-06 07:05:43.667558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.667572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.667699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.667752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:11.297 [2024-12-06 07:05:43.667779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.667791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.667862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.667881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:11.297 [2024-12-06 07:05:43.667903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.667914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.667944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.667958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:11.297 [2024-12-06 07:05:43.667975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.667991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.749839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.749889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:11.297 [2024-12-06 07:05:43.749908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.749918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 
07:05:43.818542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.818591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:11.297 [2024-12-06 07:05:43.818613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.818629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.818775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.818795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:11.297 [2024-12-06 07:05:43.818817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.818828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.818867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.818881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:11.297 [2024-12-06 07:05:43.818896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.818906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.819065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.819085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:11.297 [2024-12-06 07:05:43.819102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.819114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.819171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.819189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:11.297 [2024-12-06 07:05:43.819206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.819217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.819272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.819287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:11.297 [2024-12-06 07:05:43.819309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.819320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.819378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.297 [2024-12-06 07:05:43.819394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:11.297 [2024-12-06 07:05:43.819411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.297 [2024-12-06 07:05:43.819422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.297 [2024-12-06 07:05:43.819583] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 294.276 ms, result 0 00:39:12.235 07:05:44 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:39:12.235 07:05:44 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:12.235 [2024-12-06 07:05:44.652531] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:39:12.235 [2024-12-06 07:05:44.653005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78303 ] 00:39:12.494 [2024-12-06 07:05:44.831329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.494 [2024-12-06 07:05:44.914265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.753 [2024-12-06 07:05:45.178521] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:12.753 [2024-12-06 07:05:45.178598] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:12.753 [2024-12-06 07:05:45.335268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:12.753 [2024-12-06 07:05:45.335310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:12.753 [2024-12-06 07:05:45.335327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:12.753 [2024-12-06 07:05:45.335336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:12.753 [2024-12-06 07:05:45.338181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:12.753 [2024-12-06 07:05:45.338222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:12.753 [2024-12-06 07:05:45.338252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:39:12.753 [2024-12-06 07:05:45.338261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:12.753 [2024-12-06 07:05:45.338404] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:12.753 [2024-12-06 07:05:45.339384] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:12.753 [2024-12-06 07:05:45.339593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:12.753 [2024-12-06 07:05:45.339612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:12.753 [2024-12-06 07:05:45.339623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:39:12.753 [2024-12-06 07:05:45.339632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:12.753 [2024-12-06 07:05:45.341125] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:13.012 [2024-12-06 07:05:45.354657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.012 [2024-12-06 07:05:45.354904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:13.012 [2024-12-06 07:05:45.354933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.533 ms 00:39:13.012 [2024-12-06 07:05:45.354946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.012 [2024-12-06 07:05:45.355087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.012 [2024-12-06 07:05:45.355109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:13.012 [2024-12-06 07:05:45.355121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:39:13.012 [2024-12-06 07:05:45.355130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.012 [2024-12-06 07:05:45.359265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.012 [2024-12-06 07:05:45.359299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:13.012 [2024-12-06 07:05:45.359312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.069 ms 00:39:13.012 [2024-12-06 07:05:45.359321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.012 [2024-12-06 07:05:45.359418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.012 [2024-12-06 07:05:45.359436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:13.012 [2024-12-06 07:05:45.359446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:39:13.012 [2024-12-06 07:05:45.359455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.012 [2024-12-06 07:05:45.359490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.012 [2024-12-06 07:05:45.359502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:13.012 [2024-12-06 07:05:45.359512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:13.012 [2024-12-06 07:05:45.359520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.012 [2024-12-06 07:05:45.359546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:13.012 [2024-12-06 07:05:45.363119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.012 [2024-12-06 07:05:45.363151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:13.012 [2024-12-06 07:05:45.363164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.580 ms 00:39:13.012 [2024-12-06 07:05:45.363173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.012 [2024-12-06 07:05:45.363216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.012 [2024-12-06 07:05:45.363230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:13.012 [2024-12-06 07:05:45.363240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:13.012 [2024-12-06 07:05:45.363249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.012 [2024-12-06 07:05:45.363273] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:13.012 [2024-12-06 07:05:45.363296] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:13.012 [2024-12-06 07:05:45.363330] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:13.012 [2024-12-06 07:05:45.363346] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:13.012 [2024-12-06 07:05:45.363433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:13.012 [2024-12-06 07:05:45.363445] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:13.012 [2024-12-06 07:05:45.363456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:13.012 [2024-12-06 07:05:45.363472] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:13.012 [2024-12-06 07:05:45.363482] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:13.012 [2024-12-06 07:05:45.363491] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:13.012 [2024-12-06 07:05:45.363499] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:13.012 [2024-12-06 07:05:45.363507] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:13.013 [2024-12-06 07:05:45.363516] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:13.013 [2024-12-06 07:05:45.363525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.013 [2024-12-06 07:05:45.363533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:13.013 [2024-12-06 07:05:45.363542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:39:13.013 [2024-12-06 07:05:45.363551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.013 [2024-12-06 07:05:45.363645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.013 [2024-12-06 07:05:45.363665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:13.013 [2024-12-06 07:05:45.363675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:39:13.013 [2024-12-06 07:05:45.363684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.013 [2024-12-06 07:05:45.363801] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:13.013 [2024-12-06 07:05:45.363819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:13.013 [2024-12-06 07:05:45.363830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:13.013 [2024-12-06 07:05:45.363839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:13.013 [2024-12-06 07:05:45.363848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:13.013 [2024-12-06 07:05:45.363856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:13.013 [2024-12-06 07:05:45.363864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:13.013 [2024-12-06 07:05:45.363874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:13.013 [2024-12-06 07:05:45.363882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:13.013 [2024-12-06 07:05:45.363890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:13.013 [2024-12-06 07:05:45.363897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:13.013 [2024-12-06 07:05:45.363917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:13.013 [2024-12-06 07:05:45.363925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:13.013 [2024-12-06 07:05:45.363933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:13.013 [2024-12-06 07:05:45.363943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:13.013 [2024-12-06 07:05:45.363951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:13.013 [2024-12-06 07:05:45.363959] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:13.013 [2024-12-06 07:05:45.363967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:13.013 [2024-12-06 07:05:45.363974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:13.013 [2024-12-06 07:05:45.363982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:13.013 [2024-12-06 07:05:45.363990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:13.013 [2024-12-06 07:05:45.363998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:13.013 [2024-12-06 07:05:45.364006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:13.013 [2024-12-06 07:05:45.364013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:13.013 [2024-12-06 07:05:45.364021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:13.013 [2024-12-06 07:05:45.364029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:13.013 [2024-12-06 07:05:45.364036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:13.013 [2024-12-06 07:05:45.364044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:13.013 [2024-12-06 07:05:45.364052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:13.013 [2024-12-06 07:05:45.364059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:13.013 [2024-12-06 07:05:45.364067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:13.013 [2024-12-06 07:05:45.364075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:13.013 [2024-12-06 07:05:45.364083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:13.013 [2024-12-06 07:05:45.364091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:13.013 [2024-12-06 07:05:45.364099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:13.013 [2024-12-06 07:05:45.364107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:13.013 [2024-12-06 07:05:45.364114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:13.013 [2024-12-06 07:05:45.364122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:13.013 [2024-12-06 07:05:45.364130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:13.013 [2024-12-06 07:05:45.364138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:13.013 [2024-12-06 07:05:45.364146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:13.013 [2024-12-06 07:05:45.364153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:13.013 [2024-12-06 07:05:45.364161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:13.013 [2024-12-06 07:05:45.364169] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:13.013 [2024-12-06 07:05:45.364178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:13.013 [2024-12-06 07:05:45.364191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:13.013 [2024-12-06 07:05:45.364200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:13.013 [2024-12-06 07:05:45.364209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:13.013 
[2024-12-06 07:05:45.364217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:13.013 [2024-12-06 07:05:45.364225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:13.013 [2024-12-06 07:05:45.364233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:13.013 [2024-12-06 07:05:45.364283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:13.013 [2024-12-06 07:05:45.364293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:13.013 [2024-12-06 07:05:45.364304] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:13.013 [2024-12-06 07:05:45.364316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:13.013 [2024-12-06 07:05:45.364326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:13.013 [2024-12-06 07:05:45.364335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:13.013 [2024-12-06 07:05:45.364345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:13.013 [2024-12-06 07:05:45.364354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:13.013 [2024-12-06 07:05:45.364363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:13.013 [2024-12-06 07:05:45.364373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:13.013 [2024-12-06 07:05:45.364382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:13.013 [2024-12-06 07:05:45.364392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:13.013 [2024-12-06 07:05:45.364401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:13.013 [2024-12-06 07:05:45.364411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:13.013 [2024-12-06 07:05:45.364420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:13.013 [2024-12-06 07:05:45.364430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:13.013 [2024-12-06 07:05:45.364439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:13.013 [2024-12-06 07:05:45.364449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:13.013 [2024-12-06 07:05:45.364459] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:13.013 [2024-12-06 07:05:45.364469] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:13.013 [2024-12-06 07:05:45.364480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:13.013 [2024-12-06 07:05:45.364490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:13.013 [2024-12-06 07:05:45.364500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:13.013 [2024-12-06 07:05:45.364510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:13.013 [2024-12-06 07:05:45.364520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.013 [2024-12-06 07:05:45.364534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:13.014 [2024-12-06 07:05:45.364544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:39:13.014 [2024-12-06 07:05:45.364553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.391106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.391156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:13.014 [2024-12-06 07:05:45.391173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.443 ms 00:39:13.014 [2024-12-06 07:05:45.391183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.391337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.391354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:13.014 [2024-12-06 07:05:45.391365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:39:13.014 [2024-12-06 07:05:45.391374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.437871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.437916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:13.014 [2024-12-06 07:05:45.437953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.471 ms 00:39:13.014 [2024-12-06 07:05:45.437963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.438079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.438097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:13.014 [2024-12-06 07:05:45.438122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:13.014 [2024-12-06 07:05:45.438131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.438426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.438441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:13.014 [2024-12-06 07:05:45.438459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:39:13.014 [2024-12-06 07:05:45.438468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 
07:05:45.438593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.438610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:13.014 [2024-12-06 07:05:45.438620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:39:13.014 [2024-12-06 07:05:45.438629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.452597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.452775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:13.014 [2024-12-06 07:05:45.452885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.944 ms 00:39:13.014 [2024-12-06 07:05:45.452930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.466346] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:39:13.014 [2024-12-06 07:05:45.466538] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:13.014 [2024-12-06 07:05:45.466667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.466733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:13.014 [2024-12-06 07:05:45.466776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.601 ms 00:39:13.014 [2024-12-06 07:05:45.466874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.490246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.490413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:13.014 [2024-12-06 07:05:45.490437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.249 ms 00:39:13.014 [2024-12-06 07:05:45.490448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.503132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.503168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:13.014 [2024-12-06 07:05:45.503182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.574 ms 00:39:13.014 [2024-12-06 07:05:45.503191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.515474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.515510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:13.014 [2024-12-06 07:05:45.515524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.206 ms 00:39:13.014 [2024-12-06 07:05:45.515533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.516345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.516380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:13.014 [2024-12-06 07:05:45.516410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:39:13.014 [2024-12-06 07:05:45.516421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.573722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.573788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:13.014 [2024-12-06 07:05:45.573806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.267 ms 00:39:13.014 [2024-12-06 07:05:45.573815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.583746] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:13.014 [2024-12-06 07:05:45.596356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.596462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:13.014 [2024-12-06 07:05:45.596489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.396 ms 00:39:13.014 [2024-12-06 07:05:45.596516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.596692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.596720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:13.014 [2024-12-06 07:05:45.596780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:13.014 [2024-12-06 07:05:45.596801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.596933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.596957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:13.014 [2024-12-06 07:05:45.596975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:39:13.014 [2024-12-06 07:05:45.596999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.597066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.597092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:13.014 [2024-12-06 07:05:45.597109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:39:13.014 [2024-12-06 07:05:45.597140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.014 [2024-12-06 07:05:45.597229] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:13.014 [2024-12-06 07:05:45.597268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.014 [2024-12-06 07:05:45.597284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:13.014 [2024-12-06 07:05:45.597301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:39:13.014 [2024-12-06 07:05:45.597316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.273 [2024-12-06 07:05:45.623067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.273 [2024-12-06 07:05:45.623105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:13.273 [2024-12-06 07:05:45.623121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.697 ms 00:39:13.273 [2024-12-06 07:05:45.623131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.273 [2024-12-06 07:05:45.623223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.273 [2024-12-06 07:05:45.623241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:39:13.273 [2024-12-06 07:05:45.623251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:39:13.273 [2024-12-06 07:05:45.623260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.273 [2024-12-06 07:05:45.624446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:13.273 [2024-12-06 07:05:45.627813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 288.771 ms, result 0 00:39:13.273 [2024-12-06 07:05:45.628785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:13.273 [2024-12-06 07:05:45.642801] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:14.209  [2024-12-06T07:05:47.739Z] Copying: 24/256 [MB] (24 MBps) [2024-12-06T07:05:48.678Z] Copying: 45/256 [MB] (21 MBps) [2024-12-06T07:05:50.058Z] Copying: 66/256 [MB] (20 MBps) [2024-12-06T07:05:50.996Z] Copying: 87/256 [MB] (21 MBps) [2024-12-06T07:05:51.932Z] Copying: 108/256 [MB] (21 MBps) [2024-12-06T07:05:52.866Z] Copying: 129/256 [MB] (20 MBps) [2024-12-06T07:05:53.803Z] Copying: 150/256 [MB] (20 MBps) [2024-12-06T07:05:54.740Z] Copying: 171/256 [MB] (21 MBps) [2024-12-06T07:05:55.676Z] Copying: 192/256 [MB] (20 MBps) [2024-12-06T07:05:57.052Z] Copying: 213/256 [MB] (20 MBps) [2024-12-06T07:05:57.992Z] Copying: 233/256 [MB] (20 MBps) [2024-12-06T07:05:57.992Z] Copying: 254/256 [MB] (20 MBps) [2024-12-06T07:05:57.992Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-06 07:05:57.715074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:25.401 [2024-12-06 07:05:57.725218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.725385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:25.401 [2024-12-06 07:05:57.725508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:25.401 [2024-12-06 07:05:57.725627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.725696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:25.401 [2024-12-06 07:05:57.728754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.728925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:25.401 [2024-12-06 07:05:57.728949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:39:25.401 [2024-12-06 07:05:57.728960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.729217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.729234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:25.401 [2024-12-06 07:05:57.729260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:39:25.401 [2024-12-06 07:05:57.729269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.732266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.732293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:25.401 [2024-12-06 07:05:57.732305] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:39:25.401 [2024-12-06 07:05:57.732314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.738151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.738177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:25.401 [2024-12-06 07:05:57.738188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.817 ms 00:39:25.401 [2024-12-06 07:05:57.738197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.762325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.762362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:25.401 [2024-12-06 07:05:57.762376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.068 ms 00:39:25.401 [2024-12-06 07:05:57.762385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.777192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.777371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:25.401 [2024-12-06 07:05:57.777410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.782 ms 00:39:25.401 [2024-12-06 07:05:57.777422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.777584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.777603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:25.401 [2024-12-06 07:05:57.777628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:39:25.401 [2024-12-06 07:05:57.777637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.802741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.802776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:25.401 [2024-12-06 07:05:57.802790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.083 ms 00:39:25.401 [2024-12-06 07:05:57.802799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.827296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.827332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:25.401 [2024-12-06 07:05:57.827345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.473 ms 00:39:25.401 [2024-12-06 07:05:57.827353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.851416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.851452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:25.401 [2024-12-06 07:05:57.851465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.038 ms 00:39:25.401 [2024-12-06 07:05:57.851474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.875778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.401 [2024-12-06 07:05:57.875813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:39:25.401 [2024-12-06 07:05:57.875826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.255 ms 00:39:25.401 [2024-12-06 07:05:57.875835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.401 [2024-12-06 07:05:57.875858] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:25.401 [2024-12-06 07:05:57.875875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:25.401 [2024-12-06 07:05:57.875993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 
07:05:57.876072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:39:25.402 [2024-12-06 07:05:57.876320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:25.402 [2024-12-06 07:05:57.876850] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:25.402 [2024-12-06 07:05:57.876859] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a47c3814-54f8-4eb2-8588-0c95ee6f413a 00:39:25.402 [2024-12-06 07:05:57.876868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:25.403 [2024-12-06 07:05:57.876876] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:25.403 [2024-12-06 07:05:57.876884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:25.403 [2024-12-06 07:05:57.876893] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:25.403 [2024-12-06 07:05:57.876901] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:25.403 [2024-12-06 07:05:57.876909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:25.403 [2024-12-06 07:05:57.876922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:25.403 [2024-12-06 07:05:57.876930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:25.403 [2024-12-06 07:05:57.876938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:25.403 [2024-12-06 07:05:57.876946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.403 [2024-12-06 07:05:57.876955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:25.403 [2024-12-06 07:05:57.876964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:39:25.403 [2024-12-06 07:05:57.876973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.403 [2024-12-06 07:05:57.890322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.403 [2024-12-06 07:05:57.890355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:25.403 [2024-12-06 07:05:57.890369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.314 ms 00:39:25.403 [2024-12-06 07:05:57.890377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.403 [2024-12-06 07:05:57.890795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.403 [2024-12-06 07:05:57.890817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:25.403 [2024-12-06 07:05:57.890829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:39:25.403 [2024-12-06 07:05:57.890839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.403 [2024-12-06 07:05:57.926522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.403 [2024-12-06 07:05:57.926561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:25.403 [2024-12-06 07:05:57.926575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.403 [2024-12-06 07:05:57.926596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.403 
[2024-12-06 07:05:57.926675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.403 [2024-12-06 07:05:57.926691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:25.403 [2024-12-06 07:05:57.926700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.403 [2024-12-06 07:05:57.926743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.403 [2024-12-06 07:05:57.926806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.403 [2024-12-06 07:05:57.926838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:25.403 [2024-12-06 07:05:57.926849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.403 [2024-12-06 07:05:57.926859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.403 [2024-12-06 07:05:57.926894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.403 [2024-12-06 07:05:57.926907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:25.403 [2024-12-06 07:05:57.926916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.403 [2024-12-06 07:05:57.926926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.006749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.663 [2024-12-06 07:05:58.006808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:25.663 [2024-12-06 07:05:58.006825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.663 [2024-12-06 07:05:58.006834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.071938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.663 [2024-12-06 07:05:58.071987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:25.663 [2024-12-06 07:05:58.072003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.663 [2024-12-06 07:05:58.072012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.072079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.663 [2024-12-06 07:05:58.072093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:25.663 [2024-12-06 07:05:58.072103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.663 [2024-12-06 07:05:58.072112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.072140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.663 [2024-12-06 07:05:58.072167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:25.663 [2024-12-06 07:05:58.072177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.663 [2024-12-06 07:05:58.072185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.072314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.663 [2024-12-06 07:05:58.072333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:25.663 [2024-12-06 07:05:58.072343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.663 [2024-12-06 07:05:58.072352] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.072395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.663 [2024-12-06 07:05:58.072410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:25.663 [2024-12-06 07:05:58.072433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.663 [2024-12-06 07:05:58.072443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.072483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.663 [2024-12-06 07:05:58.072496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:25.663 [2024-12-06 07:05:58.072506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.663 [2024-12-06 07:05:58.072515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.072561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:25.663 [2024-12-06 07:05:58.072600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:25.663 [2024-12-06 07:05:58.072610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:25.663 [2024-12-06 07:05:58.072619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.663 [2024-12-06 07:05:58.072817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.584 ms, result 0 00:39:26.234 00:39:26.234 00:39:26.493 07:05:58 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:39:26.493 07:05:58 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:39:26.752 07:05:59 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:27.012 [2024-12-06 07:05:59.435264] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
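The three xtrace lines above are the heart of this part of the trim test: trim.sh@86 compares the first 4 MiB of the read-back file against /dev/zero (presumably verifying that the trimmed range reads back zeroed), trim.sh@87 checksums the same file, and trim.sh@90 writes a random pattern back into the ftl0 bdev through spdk_dd. A minimal stand-alone sketch of that sequence, with every path and flag copied verbatim from the log and only the SPDK_DIR variable added for readability:

```bash
#!/usr/bin/env bash
# Sketch of the trim-test steps logged above; paths and flags are verbatim
# from the log, SPDK_DIR is only a readability shortcut.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# trim.sh@86: the first 4 MiB of the read-back data should compare equal
# to /dev/zero (presumably checking the trimmed range reads back as zeroes).
cmp --bytes=4194304 "$SPDK_DIR/test/ftl/data" /dev/zero

# trim.sh@87: checksum the read-back data.
md5sum "$SPDK_DIR/test/ftl/data"

# trim.sh@90: write 1024 blocks of a random pattern into the ftl0 bdev;
# the JSON config tells spdk_dd how to bring the FTL instance up.
"$SPDK_DIR/build/bin/spdk_dd" \
  --if="$SPDK_DIR/test/ftl/random_pattern" \
  --ob=ftl0 \
  --count=1024 \
  --json="$SPDK_DIR/test/ftl/config/ftl.json"
```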
00:39:27.012 [2024-12-06 07:05:59.435680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78452 ] 00:39:27.271 [2024-12-06 07:05:59.609422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:27.271 [2024-12-06 07:05:59.695716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.530 [2024-12-06 07:05:59.961599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:27.530 [2024-12-06 07:05:59.961680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:27.793 [2024-12-06 07:06:00.121523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.794 [2024-12-06 07:06:00.121592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:27.794 [2024-12-06 07:06:00.121626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:27.794 [2024-12-06 07:06:00.121653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.794 [2024-12-06 07:06:00.124859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.794 [2024-12-06 07:06:00.124897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:27.794 [2024-12-06 07:06:00.124927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:39:27.794 [2024-12-06 07:06:00.124937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.794 [2024-12-06 07:06:00.125077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:27.794 [2024-12-06 07:06:00.125929] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:27.794 [2024-12-06 07:06:00.125969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.794 [2024-12-06 07:06:00.125982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:27.794 [2024-12-06 07:06:00.125994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:39:27.794 [2024-12-06 07:06:00.126003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.794 [2024-12-06 07:06:00.127220] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:27.794 [2024-12-06 07:06:00.141891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.794 [2024-12-06 07:06:00.141930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:27.794 [2024-12-06 07:06:00.141961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.672 ms 00:39:27.794 [2024-12-06 07:06:00.141972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.794 [2024-12-06 07:06:00.142082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.794 [2024-12-06 07:06:00.142102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:27.794 [2024-12-06 07:06:00.142114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:39:27.794 [2024-12-06 07:06:00.142123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.794 [2024-12-06 07:06:00.146349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
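The FTL startup traced below is driven by the --json config handed to spdk_dd above. The log shows only its effects (an ftl0 instance, nvc0n1p0 used as the write-buffer cache, device UUID a47c3814-54f8-4eb2-8588-0c95ee6f413a), so the following is a hypothetical sketch of what such an ftl.json could look like, using SPDK's generic subsystem-config shape; the base bdev name "base0" and the exact parameter set of bdev_ftl_create are assumptions and vary between SPDK versions, so treat this as illustrative only:

```bash
# Hypothetical ftl.json sketch (NOT a dump of the real file). ftl0, the
# nvc0n1p0 cache bdev and the UUID are taken from the log; "base0" and the
# exact bdev_ftl_create parameter names are illustrative assumptions.
cat > ftl.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_ftl_create",
          "params": {
            "name": "ftl0",
            "base_bdev": "base0",
            "cache": "nvc0n1p0",
            "uuid": "a47c3814-54f8-4eb2-8588-0c95ee6f413a"
          }
        }
      ]
    }
  ]
}
EOF
```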
00:39:27.794 [2024-12-06 07:06:00.146385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:27.794 [2024-12-06 07:06:00.146415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.176 ms 00:39:27.794 [2024-12-06 07:06:00.146425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.794 [2024-12-06 07:06:00.146530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.795 [2024-12-06 07:06:00.146548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:27.795 [2024-12-06 07:06:00.146559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:39:27.795 [2024-12-06 07:06:00.146569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.795 [2024-12-06 07:06:00.146606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.795 [2024-12-06 07:06:00.146619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:27.795 [2024-12-06 07:06:00.146629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:27.795 [2024-12-06 07:06:00.146638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.795 [2024-12-06 07:06:00.146665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:27.795 [2024-12-06 07:06:00.150695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.795 [2024-12-06 07:06:00.150754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:27.795 [2024-12-06 07:06:00.150785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:39:27.795 [2024-12-06 07:06:00.150795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.795 [2024-12-06 07:06:00.150892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.795 [2024-12-06 07:06:00.150909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:27.795 [2024-12-06 07:06:00.150921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:39:27.795 [2024-12-06 07:06:00.150931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.795 [2024-12-06 07:06:00.150972] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:27.795 [2024-12-06 07:06:00.150998] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:27.795 [2024-12-06 07:06:00.151070] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:27.795 [2024-12-06 07:06:00.151090] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:27.795 [2024-12-06 07:06:00.151212] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:27.795 [2024-12-06 07:06:00.151447] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:27.795 [2024-12-06 07:06:00.151469] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:27.795 [2024-12-06 07:06:00.151490] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:27.795 [2024-12-06 07:06:00.151504] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:27.795 [2024-12-06 07:06:00.151515] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:27.795 [2024-12-06 07:06:00.151525] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:27.795 [2024-12-06 07:06:00.151535] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:27.795 [2024-12-06 07:06:00.151544] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:27.795 [2024-12-06 07:06:00.151556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.795 [2024-12-06 07:06:00.151567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:27.795 [2024-12-06 07:06:00.151578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:39:27.795 [2024-12-06 07:06:00.151588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.795 [2024-12-06 07:06:00.151687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.795 [2024-12-06 07:06:00.151735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:27.795 [2024-12-06 07:06:00.151749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:39:27.795 [2024-12-06 07:06:00.151759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.795 [2024-12-06 07:06:00.151864] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:27.795 [2024-12-06 07:06:00.151880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:27.795 [2024-12-06 07:06:00.151891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:27.795 [2024-12-06 07:06:00.151902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:27.796 [2024-12-06 07:06:00.151912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:27.796 [2024-12-06 07:06:00.151921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:27.796 [2024-12-06 07:06:00.151930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:27.796 [2024-12-06 07:06:00.151939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:27.796 [2024-12-06 07:06:00.151948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:27.796 [2024-12-06 07:06:00.151957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:27.796 [2024-12-06 07:06:00.151966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:27.796 [2024-12-06 07:06:00.151988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:27.796 [2024-12-06 07:06:00.151997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:27.796 [2024-12-06 07:06:00.152007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:27.796 [2024-12-06 07:06:00.152016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:27.796 [2024-12-06 07:06:00.152025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:27.796 [2024-12-06 07:06:00.152034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:27.796 [2024-12-06 07:06:00.152043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:27.796 [2024-12-06 07:06:00.152052] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:27.796 [2024-12-06 07:06:00.152062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:27.796 [2024-12-06 07:06:00.152072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:27.796 [2024-12-06 07:06:00.152081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:27.796 [2024-12-06 07:06:00.152090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:27.796 [2024-12-06 07:06:00.152099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:27.796 [2024-12-06 07:06:00.152108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:27.796 [2024-12-06 07:06:00.152117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:27.796 [2024-12-06 07:06:00.152126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:27.796 [2024-12-06 07:06:00.152135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:27.796 [2024-12-06 07:06:00.152144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:27.796 [2024-12-06 07:06:00.152152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:27.796 [2024-12-06 07:06:00.152161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:27.796 [2024-12-06 07:06:00.152170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:27.796 [2024-12-06 07:06:00.152179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:27.796 [2024-12-06 07:06:00.152188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:27.797 [2024-12-06 07:06:00.152197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:27.797 [2024-12-06 07:06:00.152206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:27.797 [2024-12-06 07:06:00.152215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:27.797 [2024-12-06 07:06:00.152224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:27.797 [2024-12-06 07:06:00.152233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:27.797 [2024-12-06 07:06:00.152241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:27.797 [2024-12-06 07:06:00.152281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:27.797 [2024-12-06 07:06:00.152292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:27.797 [2024-12-06 07:06:00.152302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:27.797 [2024-12-06 07:06:00.152313] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:27.797 [2024-12-06 07:06:00.152324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:27.797 [2024-12-06 07:06:00.152341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:27.797 [2024-12-06 07:06:00.152352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:27.797 [2024-12-06 07:06:00.152363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:27.797 [2024-12-06 07:06:00.152374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:27.797 [2024-12-06 07:06:00.152384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:27.797 
[2024-12-06 07:06:00.152394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:27.797 [2024-12-06 07:06:00.152404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:27.797 [2024-12-06 07:06:00.152416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:27.797 [2024-12-06 07:06:00.152429] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:27.797 [2024-12-06 07:06:00.152443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:27.797 [2024-12-06 07:06:00.152456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:27.799 [2024-12-06 07:06:00.152467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:27.799 [2024-12-06 07:06:00.152479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:27.799 [2024-12-06 07:06:00.152490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:27.799 [2024-12-06 07:06:00.152501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:27.799 [2024-12-06 07:06:00.152512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:27.799 [2024-12-06 07:06:00.152523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:27.799 [2024-12-06 07:06:00.152533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:27.799 [2024-12-06 07:06:00.152544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:27.800 [2024-12-06 07:06:00.152555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:27.800 [2024-12-06 07:06:00.152566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:27.800 [2024-12-06 07:06:00.152577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:27.800 [2024-12-06 07:06:00.152602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:27.800 [2024-12-06 07:06:00.152628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:27.800 [2024-12-06 07:06:00.152638] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:27.800 [2024-12-06 07:06:00.152664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:27.800 [2024-12-06 07:06:00.152674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:39:27.800 [2024-12-06 07:06:00.152684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:27.800 [2024-12-06 07:06:00.152709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:27.800 [2024-12-06 07:06:00.152720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:27.800 [2024-12-06 07:06:00.152730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.800 [2024-12-06 07:06:00.152747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:27.800 [2024-12-06 07:06:00.152758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:39:27.800 [2024-12-06 07:06:00.152768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.800 [2024-12-06 07:06:00.180156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.800 [2024-12-06 07:06:00.180206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:27.800 [2024-12-06 07:06:00.180222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.279 ms 00:39:27.800 [2024-12-06 07:06:00.180232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.800 [2024-12-06 07:06:00.180437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.800 [2024-12-06 07:06:00.180458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:27.800 [2024-12-06 07:06:00.180472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:39:27.800 [2024-12-06 07:06:00.180496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.800 [2024-12-06 07:06:00.227220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.800 [2024-12-06 07:06:00.227267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:27.800 [2024-12-06 07:06:00.227288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.692 ms 00:39:27.800 [2024-12-06 07:06:00.227298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.800 [2024-12-06 07:06:00.227416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.800 [2024-12-06 07:06:00.227433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:27.800 [2024-12-06 07:06:00.227444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:27.800 [2024-12-06 07:06:00.227454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.800 [2024-12-06 07:06:00.227786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.800 [2024-12-06 07:06:00.227803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:27.800 [2024-12-06 07:06:00.227837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:39:27.800 [2024-12-06 07:06:00.227846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.800 [2024-12-06 07:06:00.227982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.800 [2024-12-06 07:06:00.227999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:27.800 [2024-12-06 07:06:00.228009] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:39:27.800 [2024-12-06 07:06:00.228019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.800 [2024-12-06 07:06:00.242158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.800 [2024-12-06 07:06:00.242193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:27.801 [2024-12-06 07:06:00.242208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.113 ms 00:39:27.801 [2024-12-06 07:06:00.242217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.801 [2024-12-06 07:06:00.255703] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:39:27.801 [2024-12-06 07:06:00.255787] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:27.801 [2024-12-06 07:06:00.255806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.801 [2024-12-06 07:06:00.255816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:27.801 [2024-12-06 07:06:00.255828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.486 ms 00:39:27.801 [2024-12-06 07:06:00.255837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.801 [2024-12-06 07:06:00.280596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.801 [2024-12-06 07:06:00.280812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:27.801 [2024-12-06 07:06:00.280837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.641 ms 00:39:27.801 [2024-12-06 07:06:00.280849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.801 [2024-12-06 07:06:00.293723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.801 [2024-12-06 07:06:00.293758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:27.801 [2024-12-06 07:06:00.293772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.768 ms 00:39:27.801 [2024-12-06 07:06:00.293781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.801 [2024-12-06 07:06:00.306308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.801 [2024-12-06 07:06:00.306344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:27.801 [2024-12-06 07:06:00.306358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.449 ms 00:39:27.801 [2024-12-06 07:06:00.306367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.801 [2024-12-06 07:06:00.307086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.801 [2024-12-06 07:06:00.307131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:27.801 [2024-12-06 07:06:00.307160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:39:27.801 [2024-12-06 07:06:00.307170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.801 [2024-12-06 07:06:00.368552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:27.801 [2024-12-06 07:06:00.368939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:27.801 [2024-12-06 07:06:00.368969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 61.349 ms 00:39:27.801 [2024-12-06 07:06:00.368981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:27.801 [2024-12-06 07:06:00.380058] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:28.061 [2024-12-06 07:06:00.391936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.391993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:28.061 [2024-12-06 07:06:00.392009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:39:28.061 [2024-12-06 07:06:00.392026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.392145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.392178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:28.061 [2024-12-06 07:06:00.392190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:28.061 [2024-12-06 07:06:00.392200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.392303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.392321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:28.061 [2024-12-06 07:06:00.392334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:39:28.061 [2024-12-06 07:06:00.392350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.392413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.392430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:28.061 [2024-12-06 07:06:00.392443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:39:28.061 [2024-12-06 07:06:00.392454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.392499] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:28.061 [2024-12-06 07:06:00.392515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.392527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:28.061 [2024-12-06 07:06:00.392538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:39:28.061 [2024-12-06 07:06:00.392548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.419023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.419065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:28.061 [2024-12-06 07:06:00.419112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.403 ms 00:39:28.061 [2024-12-06 07:06:00.419122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.419232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.419251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:28.061 [2024-12-06 07:06:00.419262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:39:28.061 [2024-12-06 07:06:00.419272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
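A quick sanity check on the layout numbers printed during this startup: the layout dump above reports 23592960 L2P entries with a 4-byte address size, the NV-cache layout lists the l2p region at 90.00 MiB, and ftl_l2p_cache has just announced a maximum resident size of 59 (of 60) MiB. The region size follows directly from entries times address size:

```bash
# Numbers copied from the layout dump above; the arithmetic just confirms
# that the l2p region size equals entries * address_size.
entries=23592960
addr_size=4
echo "$(( entries * addr_size / 1024 / 1024 )) MiB"   # -> 90 MiB
```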
00:39:28.061 [2024-12-06 07:06:00.420562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:28.061 [2024-12-06 07:06:00.424108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.633 ms, result 0 00:39:28.061 [2024-12-06 07:06:00.425006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:28.061 [2024-12-06 07:06:00.439281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:28.061  [2024-12-06T07:06:00.652Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-12-06 07:06:00.622488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:28.061 [2024-12-06 07:06:00.632156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.632194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:28.061 [2024-12-06 07:06:00.632232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:28.061 [2024-12-06 07:06:00.632242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.632296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:28.061 [2024-12-06 07:06:00.635165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.635362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:28.061 [2024-12-06 07:06:00.635386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.850 ms 00:39:28.061 [2024-12-06 07:06:00.635396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.637108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.637143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:28.061 [2024-12-06 07:06:00.637157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:39:28.061 [2024-12-06 07:06:00.637167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.640510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.640546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:28.061 [2024-12-06 07:06:00.640576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.318 ms 00:39:28.061 [2024-12-06 07:06:00.640587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.061 [2024-12-06 07:06:00.646838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.061 [2024-12-06 07:06:00.646866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:28.061 [2024-12-06 07:06:00.646895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.213 ms 00:39:28.061 [2024-12-06 07:06:00.646904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.321 [2024-12-06 07:06:00.673246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.321 [2024-12-06 07:06:00.673281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:28.321 [2024-12-06 07:06:00.673312] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.279 ms 00:39:28.321 [2024-12-06 07:06:00.673321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.321 [2024-12-06 07:06:00.689681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.321 [2024-12-06 07:06:00.689764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:28.321 [2024-12-06 07:06:00.689798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.285 ms 00:39:28.321 [2024-12-06 07:06:00.689809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.321 [2024-12-06 07:06:00.689963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.321 [2024-12-06 07:06:00.689998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:28.321 [2024-12-06 07:06:00.690023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:39:28.321 [2024-12-06 07:06:00.690035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.321 [2024-12-06 07:06:00.720150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.321 [2024-12-06 07:06:00.720195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:28.321 [2024-12-06 07:06:00.720225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.092 ms 00:39:28.321 [2024-12-06 07:06:00.720234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.321 [2024-12-06 07:06:00.746565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.321 [2024-12-06 07:06:00.746600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:28.321 [2024-12-06 07:06:00.746630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.238 ms 00:39:28.321 [2024-12-06 07:06:00.746639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.321 [2024-12-06 07:06:00.771575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.321 [2024-12-06 07:06:00.771611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:28.321 [2024-12-06 07:06:00.771641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.897 ms 00:39:28.321 [2024-12-06 07:06:00.771650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.321 [2024-12-06 07:06:00.796756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.321 [2024-12-06 07:06:00.796799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:28.321 [2024-12-06 07:06:00.796830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.010 ms 00:39:28.321 [2024-12-06 07:06:00.796839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.321 [2024-12-06 07:06:00.796878] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:28.321 [2024-12-06 07:06:00.796898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:28.321 [2024-12-06 07:06:00.796910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:28.321 [2024-12-06 07:06:00.796919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:28.321 [2024-12-06 07:06:00.796928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:39:28.321 [2024-12-06 07:06:00.796938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:28.321 [2024-12-06 07:06:00.796948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:28.321 [2024-12-06 07:06:00.796957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:28.321 [2024-12-06 07:06:00.796966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.796976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.796985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.796995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797670] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:28.322 [2024-12-06 07:06:00.797907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:28.323 [2024-12-06 07:06:00.797917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:28.323 [2024-12-06 07:06:00.797926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:28.323 [2024-12-06 07:06:00.797936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:28.323 [2024-12-06 07:06:00.797961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:28.323 [2024-12-06 07:06:00.797971] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a47c3814-54f8-4eb2-8588-0c95ee6f413a 00:39:28.323 [2024-12-06 07:06:00.797981] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:28.323 [2024-12-06 07:06:00.797990] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:39:28.323 [2024-12-06 07:06:00.797999] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:28.323 [2024-12-06 07:06:00.798008] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:28.323 [2024-12-06 07:06:00.798017] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:28.323 [2024-12-06 07:06:00.798026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:28.323 [2024-12-06 07:06:00.798040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:28.323 [2024-12-06 07:06:00.798049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:28.323 [2024-12-06 07:06:00.798058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:28.323 [2024-12-06 07:06:00.798067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.323 [2024-12-06 07:06:00.798076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:28.323 [2024-12-06 07:06:00.798087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:39:28.323 [2024-12-06 07:06:00.798097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.323 [2024-12-06 07:06:00.811733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.323 [2024-12-06 07:06:00.811764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:28.323 [2024-12-06 07:06:00.811794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.596 ms 00:39:28.323 [2024-12-06 07:06:00.811803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.323 [2024-12-06 07:06:00.812174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:28.323 [2024-12-06 07:06:00.812193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:28.323 [2024-12-06 07:06:00.812204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:39:28.323 [2024-12-06 07:06:00.812214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.323 [2024-12-06 07:06:00.848946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.323 [2024-12-06 07:06:00.848984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:28.323 [2024-12-06 07:06:00.849015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.323 [2024-12-06 07:06:00.849030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.323 [2024-12-06 07:06:00.849098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.323 [2024-12-06 07:06:00.849113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:28.323 [2024-12-06 07:06:00.849123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.323 [2024-12-06 07:06:00.849132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.323 [2024-12-06 07:06:00.849180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.323 [2024-12-06 07:06:00.849195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:28.323 [2024-12-06 07:06:00.849206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.323 [2024-12-06 07:06:00.849215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.323 [2024-12-06 07:06:00.849241] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.323 [2024-12-06 07:06:00.849253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:28.323 [2024-12-06 07:06:00.849262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.323 [2024-12-06 07:06:00.849271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.931911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.581 [2024-12-06 07:06:00.931969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:28.581 [2024-12-06 07:06:00.932000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.581 [2024-12-06 07:06:00.932016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.998545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.581 [2024-12-06 07:06:00.998590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:28.581 [2024-12-06 07:06:00.998620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.581 [2024-12-06 07:06:00.998630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.998693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.581 [2024-12-06 07:06:00.998708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:28.581 [2024-12-06 07:06:00.998719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.581 [2024-12-06 07:06:00.998766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.998798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.581 [2024-12-06 07:06:00.998816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:28.581 [2024-12-06 07:06:00.998841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.581 [2024-12-06 07:06:00.998850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.998959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.581 [2024-12-06 07:06:00.998976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:28.581 [2024-12-06 07:06:00.998987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.581 [2024-12-06 07:06:00.998997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.999043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.581 [2024-12-06 07:06:00.999058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:28.581 [2024-12-06 07:06:00.999075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.581 [2024-12-06 07:06:00.999085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.999176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.581 [2024-12-06 07:06:00.999204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:28.581 [2024-12-06 07:06:00.999215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.581 [2024-12-06 07:06:00.999225] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.999275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:28.581 [2024-12-06 07:06:00.999296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:28.581 [2024-12-06 07:06:00.999307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:28.581 [2024-12-06 07:06:00.999317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:28.581 [2024-12-06 07:06:00.999470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.300 ms, result 0 00:39:29.515 00:39:29.515 00:39:29.515 07:06:01 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78483 00:39:29.515 07:06:01 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:39:29.515 07:06:01 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78483 00:39:29.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:29.515 07:06:01 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78483 ']' 00:39:29.516 07:06:01 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:29.516 07:06:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:29.516 07:06:01 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:29.516 07:06:01 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:29.516 07:06:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:39:29.516 [2024-12-06 07:06:01.918374] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
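The trim.sh trace above starts a fresh spdk_tgt with FTL init logging (-L ftl_init) and blocks on waitforlisten until the target answers on the default UNIX domain socket /var/tmp/spdk.sock before driving it over rpc.py. A minimal bash sketch of that start-and-wait pattern, assuming the repo layout from this run (/home/vagrant/spdk_repo/spdk); the polling loop below uses rpc_get_methods as a cheap liveness probe and is an illustration, not the actual waitforlisten implementation from autotest_common.sh:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Start the target with FTL init tracing, as trim.sh@92 does above.
  "$SPDK/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!
  # Poll the default RPC socket until the target accepts requests.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

The saved pid (svcpid, 78483 in this run) is what killprocess later uses for teardown at trim.sh@102 below.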
00:39:29.516 [2024-12-06 07:06:01.918837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78483 ] 00:39:29.516 [2024-12-06 07:06:02.096091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.773 [2024-12-06 07:06:02.184811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:30.340 07:06:02 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:30.340 07:06:02 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:39:30.340 07:06:02 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:39:30.597 [2024-12-06 07:06:03.169057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:30.597 [2024-12-06 07:06:03.169143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:30.855 [2024-12-06 07:06:03.350085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.855 [2024-12-06 07:06:03.350149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:30.855 [2024-12-06 07:06:03.350191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:30.855 [2024-12-06 07:06:03.350203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.855 [2024-12-06 07:06:03.353495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.855 [2024-12-06 07:06:03.353535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:30.855 [2024-12-06 07:06:03.353570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:39:30.855 [2024-12-06 07:06:03.353580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.855 [2024-12-06 07:06:03.353771] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:30.855 [2024-12-06 07:06:03.354788] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:30.855 [2024-12-06 07:06:03.354844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.855 [2024-12-06 07:06:03.354859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:30.855 [2024-12-06 07:06:03.354873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:39:30.855 [2024-12-06 07:06:03.354883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.855 [2024-12-06 07:06:03.355997] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:30.855 [2024-12-06 07:06:03.369681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.855 [2024-12-06 07:06:03.369788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:30.855 [2024-12-06 07:06:03.369808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.689 ms 00:39:30.855 [2024-12-06 07:06:03.369839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.855 [2024-12-06 07:06:03.369974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.855 [2024-12-06 07:06:03.370003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:30.855 [2024-12-06 07:06:03.370017] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:39:30.855 [2024-12-06 07:06:03.370049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.855 [2024-12-06 07:06:03.374270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.855 [2024-12-06 07:06:03.374337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:30.855 [2024-12-06 07:06:03.374354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.134 ms 00:39:30.855 [2024-12-06 07:06:03.374370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.855 [2024-12-06 07:06:03.374531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.855 [2024-12-06 07:06:03.374559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:30.856 [2024-12-06 07:06:03.374574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:39:30.856 [2024-12-06 07:06:03.374598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.856 [2024-12-06 07:06:03.374631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.856 [2024-12-06 07:06:03.374653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:30.856 [2024-12-06 07:06:03.374665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:30.856 [2024-12-06 07:06:03.374680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.856 [2024-12-06 07:06:03.374733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:30.856 [2024-12-06 07:06:03.378655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.856 [2024-12-06 07:06:03.378689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:30.856 [2024-12-06 07:06:03.378765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.946 ms 00:39:30.856 [2024-12-06 07:06:03.378782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.856 [2024-12-06 07:06:03.378855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.856 [2024-12-06 07:06:03.378874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:30.856 [2024-12-06 07:06:03.378892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:30.856 [2024-12-06 07:06:03.378908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.856 [2024-12-06 07:06:03.378941] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:30.856 [2024-12-06 07:06:03.378973] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:30.856 [2024-12-06 07:06:03.379029] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:30.856 [2024-12-06 07:06:03.379052] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:30.856 [2024-12-06 07:06:03.379166] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:30.856 [2024-12-06 07:06:03.379182] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:30.856 [2024-12-06 07:06:03.379207] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:30.856 [2024-12-06 07:06:03.379221] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:30.856 [2024-12-06 07:06:03.379238] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:30.856 [2024-12-06 07:06:03.379250] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:30.856 [2024-12-06 07:06:03.379266] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:30.856 [2024-12-06 07:06:03.379277] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:30.856 [2024-12-06 07:06:03.379294] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:30.856 [2024-12-06 07:06:03.379306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.856 [2024-12-06 07:06:03.379320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:30.856 [2024-12-06 07:06:03.379332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:39:30.856 [2024-12-06 07:06:03.379348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.856 [2024-12-06 07:06:03.379435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.856 [2024-12-06 07:06:03.379456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:30.856 [2024-12-06 07:06:03.379469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:39:30.856 [2024-12-06 07:06:03.379484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.856 [2024-12-06 07:06:03.379579] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:30.856 [2024-12-06 07:06:03.379601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:30.856 [2024-12-06 07:06:03.379614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:30.856 [2024-12-06 07:06:03.379630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.856 [2024-12-06 07:06:03.379642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:30.856 [2024-12-06 07:06:03.379657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:30.856 [2024-12-06 07:06:03.379668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:30.856 [2024-12-06 07:06:03.379687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:30.856 [2024-12-06 07:06:03.379698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:30.856 [2024-12-06 07:06:03.379750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:30.856 [2024-12-06 07:06:03.379764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:30.856 [2024-12-06 07:06:03.379780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:30.856 [2024-12-06 07:06:03.379791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:30.856 [2024-12-06 07:06:03.379805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:30.856 [2024-12-06 07:06:03.379817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:30.856 [2024-12-06 07:06:03.379847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.856 
[2024-12-06 07:06:03.379858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:30.856 [2024-12-06 07:06:03.379873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:30.856 [2024-12-06 07:06:03.379900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.856 [2024-12-06 07:06:03.379916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:30.856 [2024-12-06 07:06:03.379930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:30.856 [2024-12-06 07:06:03.379945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:30.856 [2024-12-06 07:06:03.379956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:30.856 [2024-12-06 07:06:03.379974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:30.856 [2024-12-06 07:06:03.379985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:30.856 [2024-12-06 07:06:03.380000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:30.856 [2024-12-06 07:06:03.380011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:30.856 [2024-12-06 07:06:03.380040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:30.856 [2024-12-06 07:06:03.380067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:30.856 [2024-12-06 07:06:03.380084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:30.856 [2024-12-06 07:06:03.380111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:30.856 [2024-12-06 07:06:03.380126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:30.856 [2024-12-06 07:06:03.380152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:30.856 [2024-12-06 07:06:03.380168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:30.856 [2024-12-06 07:06:03.380179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:30.856 [2024-12-06 07:06:03.380194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:30.856 [2024-12-06 07:06:03.380218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:30.856 [2024-12-06 07:06:03.380233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:30.856 [2024-12-06 07:06:03.380255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:30.856 [2024-12-06 07:06:03.380311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.856 [2024-12-06 07:06:03.380325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:30.856 [2024-12-06 07:06:03.380343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:30.856 [2024-12-06 07:06:03.380355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.856 [2024-12-06 07:06:03.380388] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:30.856 [2024-12-06 07:06:03.380407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:30.856 [2024-12-06 07:06:03.380425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:30.856 [2024-12-06 07:06:03.380438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.856 [2024-12-06 07:06:03.380457] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:39:30.856 [2024-12-06 07:06:03.380470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:30.856 [2024-12-06 07:06:03.380486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:30.856 [2024-12-06 07:06:03.380499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:30.856 [2024-12-06 07:06:03.380515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:30.856 [2024-12-06 07:06:03.380529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:30.856 [2024-12-06 07:06:03.380548] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:30.856 [2024-12-06 07:06:03.380579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:30.856 [2024-12-06 07:06:03.380618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:30.856 [2024-12-06 07:06:03.380631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:30.856 [2024-12-06 07:06:03.380662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:30.856 [2024-12-06 07:06:03.380675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:30.856 [2024-12-06 07:06:03.380691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:30.856 [2024-12-06 07:06:03.380703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:30.856 [2024-12-06 07:06:03.380741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:30.856 [2024-12-06 07:06:03.380754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:30.856 [2024-12-06 07:06:03.380770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:30.857 [2024-12-06 07:06:03.380782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:30.857 [2024-12-06 07:06:03.380798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:30.857 [2024-12-06 07:06:03.380810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:30.857 [2024-12-06 07:06:03.380826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:30.857 [2024-12-06 07:06:03.380839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:30.857 [2024-12-06 07:06:03.380855] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:30.857 [2024-12-06 
07:06:03.380868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:30.857 [2024-12-06 07:06:03.380889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:30.857 [2024-12-06 07:06:03.380902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:30.857 [2024-12-06 07:06:03.380918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:30.857 [2024-12-06 07:06:03.380930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:30.857 [2024-12-06 07:06:03.380948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.857 [2024-12-06 07:06:03.380961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:30.857 [2024-12-06 07:06:03.380977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:39:30.857 [2024-12-06 07:06:03.380994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.857 [2024-12-06 07:06:03.409498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.857 [2024-12-06 07:06:03.409551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:30.857 [2024-12-06 07:06:03.409589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.400 ms 00:39:30.857 [2024-12-06 07:06:03.409603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.857 [2024-12-06 07:06:03.409810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.857 [2024-12-06 07:06:03.409831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:30.857 [2024-12-06 07:06:03.409847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:39:30.857 [2024-12-06 07:06:03.409873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.114 [2024-12-06 07:06:03.445014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.445068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:31.115 [2024-12-06 07:06:03.445105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.109 ms 00:39:31.115 [2024-12-06 07:06:03.445117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.445229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.445249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:31.115 [2024-12-06 07:06:03.445278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:31.115 [2024-12-06 07:06:03.445289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.445618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.445637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:31.115 [2024-12-06 07:06:03.445651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:39:31.115 [2024-12-06 07:06:03.445660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.445829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.445865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:31.115 [2024-12-06 07:06:03.445879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:39:31.115 [2024-12-06 07:06:03.445889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.461921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.462144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:31.115 [2024-12-06 07:06:03.462183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.998 ms 00:39:31.115 [2024-12-06 07:06:03.462198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.485681] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:31.115 [2024-12-06 07:06:03.485748] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:31.115 [2024-12-06 07:06:03.485786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.485798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:31.115 [2024-12-06 07:06:03.485812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.451 ms 00:39:31.115 [2024-12-06 07:06:03.485847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.510039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.510092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:31.115 [2024-12-06 07:06:03.510129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.092 ms 00:39:31.115 [2024-12-06 07:06:03.510140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.523145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.523182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:31.115 [2024-12-06 07:06:03.523219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.917 ms 00:39:31.115 [2024-12-06 07:06:03.523229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.536221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.536439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:31.115 [2024-12-06 07:06:03.536474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.910 ms 00:39:31.115 [2024-12-06 07:06:03.536488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.537339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.537378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:31.115 [2024-12-06 07:06:03.537397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:39:31.115 [2024-12-06 07:06:03.537409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 
07:06:03.598460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.598779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:31.115 [2024-12-06 07:06:03.598815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.016 ms 00:39:31.115 [2024-12-06 07:06:03.598829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.609536] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:31.115 [2024-12-06 07:06:03.621607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.621688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:31.115 [2024-12-06 07:06:03.621711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.658 ms 00:39:31.115 [2024-12-06 07:06:03.621755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.621891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.621913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:31.115 [2024-12-06 07:06:03.621928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:31.115 [2024-12-06 07:06:03.621941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.622000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.622019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:31.115 [2024-12-06 07:06:03.622031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:39:31.115 [2024-12-06 07:06:03.622045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.622075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.622122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:31.115 [2024-12-06 07:06:03.622149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:31.115 [2024-12-06 07:06:03.622180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.622221] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:31.115 [2024-12-06 07:06:03.622242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.622256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:31.115 [2024-12-06 07:06:03.622269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:39:31.115 [2024-12-06 07:06:03.622280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.647930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.648132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:31.115 [2024-12-06 07:06:03.648167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.609 ms 00:39:31.115 [2024-12-06 07:06:03.648180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.648350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.115 [2024-12-06 07:06:03.648373] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:31.115 [2024-12-06 07:06:03.648389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:39:31.115 [2024-12-06 07:06:03.648404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.115 [2024-12-06 07:06:03.649659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:31.115 [2024-12-06 07:06:03.653269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.184 ms, result 0 00:39:31.115 [2024-12-06 07:06:03.654610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:31.115 Some configs were skipped because the RPC state that can call them passed over. 00:39:31.115 07:06:03 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:39:31.680 [2024-12-06 07:06:03.966614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.680 [2024-12-06 07:06:03.966914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:39:31.680 [2024-12-06 07:06:03.967041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:39:31.680 [2024-12-06 07:06:03.967193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.680 [2024-12-06 07:06:03.967302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.110 ms, result 0 00:39:31.680 true 00:39:31.680 07:06:03 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:39:31.680 [2024-12-06 07:06:04.174584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:31.680 [2024-12-06 07:06:04.174857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:39:31.680 [2024-12-06 07:06:04.174986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:39:31.680 [2024-12-06 07:06:04.175134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:31.680 [2024-12-06 07:06:04.175263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.862 ms, result 0 00:39:31.680 true 00:39:31.680 07:06:04 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78483 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78483 ']' 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78483 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78483 00:39:31.680 killing process with pid 78483 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78483' 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78483 00:39:31.680 07:06:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78483 00:39:32.616 [2024-12-06 07:06:04.968102] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:04.968177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:32.616 [2024-12-06 07:06:04.968197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:32.616 [2024-12-06 07:06:04.968210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:04.968241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:32.616 [2024-12-06 07:06:04.971182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:04.971213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:32.616 [2024-12-06 07:06:04.971247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:39:32.616 [2024-12-06 07:06:04.971257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:04.971534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:04.971553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:32.616 [2024-12-06 07:06:04.971567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:39:32.616 [2024-12-06 07:06:04.971578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:04.975242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:04.975285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:32.616 [2024-12-06 07:06:04.975308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:39:32.616 [2024-12-06 07:06:04.975320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:04.981559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:04.981773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:32.616 [2024-12-06 07:06:04.981807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:39:32.616 [2024-12-06 07:06:04.981820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:04.992211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:04.992495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:32.616 [2024-12-06 07:06:04.992532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.326 ms 00:39:32.616 [2024-12-06 07:06:04.992545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:05.000333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:05.000373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:32.616 [2024-12-06 07:06:05.000408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:39:32.616 [2024-12-06 07:06:05.000419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:05.000571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:05.000590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:32.616 [2024-12-06 07:06:05.000605] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:39:32.616 [2024-12-06 07:06:05.000615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:05.011360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:05.011396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:32.616 [2024-12-06 07:06:05.011429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.719 ms 00:39:32.616 [2024-12-06 07:06:05.011439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:05.022363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:05.022398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:32.616 [2024-12-06 07:06:05.022442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.854 ms 00:39:32.616 [2024-12-06 07:06:05.022454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:05.032648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:05.032681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:32.616 [2024-12-06 07:06:05.032747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.116 ms 00:39:32.616 [2024-12-06 07:06:05.032761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:05.042978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.616 [2024-12-06 07:06:05.043012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:32.616 [2024-12-06 07:06:05.043049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.105 ms 00:39:32.616 [2024-12-06 07:06:05.043061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.616 [2024-12-06 07:06:05.043122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:32.616 [2024-12-06 07:06:05.043145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 07:06:05.043276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:32.616 [2024-12-06 
07:06:05.043288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11-100: 0 / 261120 wr_cnt: 0 state: free
00:39:32.618 [2024-12-06 07:06:05.044543] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:39:32.618 [2024-12-06 07:06:05.044563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a47c3814-54f8-4eb2-8588-0c95ee6f413a
00:39:32.618 [2024-12-06 07:06:05.044577] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:39:32.618 [2024-12-06 07:06:05.044589] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:39:32.618 [2024-12-06 07:06:05.044600] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:39:32.618 [2024-12-06 07:06:05.044613] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:39:32.618 [2024-12-06 07:06:05.044638] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:39:32.618 [2024-12-06 07:06:05.044650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:39:32.618 [2024-12-06 07:06:05.044660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:39:32.618 [2024-12-06 07:06:05.044672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:39:32.618 [2024-12-06 07:06:05.044682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:39:32.618 [2024-12-06 07:06:05.044694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:32.618 [2024-12-06 07:06:05.044705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:32.618 [2024-12-06 07:06:05.044718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.579 ms 00:39:32.618 [2024-12-06 07:06:05.044728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.618 [2024-12-06 07:06:05.058368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.618 [2024-12-06 07:06:05.058404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:32.618 [2024-12-06 07:06:05.058441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.570 ms 00:39:32.618 [2024-12-06 07:06:05.058451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.618 [2024-12-06 07:06:05.058897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:32.618 [2024-12-06 07:06:05.058923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:32.618 [2024-12-06 07:06:05.058943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:39:32.618 [2024-12-06 07:06:05.058954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.618 [2024-12-06 07:06:05.105396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.618 [2024-12-06 07:06:05.105439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:32.618 [2024-12-06 07:06:05.105475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.618 [2024-12-06 07:06:05.105485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.618 [2024-12-06 07:06:05.105599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.618 [2024-12-06 07:06:05.105616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:32.618 [2024-12-06 07:06:05.105633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.618 [2024-12-06 07:06:05.105643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.618 [2024-12-06 07:06:05.105706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.618 [2024-12-06 07:06:05.105765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:32.618 [2024-12-06 07:06:05.105785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.618 [2024-12-06 07:06:05.105795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.618 [2024-12-06 07:06:05.105833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.618 [2024-12-06 07:06:05.105864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:32.618 [2024-12-06 07:06:05.105877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.618 [2024-12-06 07:06:05.105889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.618 [2024-12-06 07:06:05.186571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.618 [2024-12-06 07:06:05.186624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:32.618 [2024-12-06 07:06:05.186648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.618 [2024-12-06 07:06:05.186659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.877 [2024-12-06 
07:06:05.253258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.877 [2024-12-06 07:06:05.253307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:32.877 [2024-12-06 07:06:05.253327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.877 [2024-12-06 07:06:05.253340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.877 [2024-12-06 07:06:05.253439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.877 [2024-12-06 07:06:05.253457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:32.877 [2024-12-06 07:06:05.253472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.877 [2024-12-06 07:06:05.253482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.877 [2024-12-06 07:06:05.253524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.877 [2024-12-06 07:06:05.253539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:32.877 [2024-12-06 07:06:05.253555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.877 [2024-12-06 07:06:05.253566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.877 [2024-12-06 07:06:05.253682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.877 [2024-12-06 07:06:05.253700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:32.877 [2024-12-06 07:06:05.253773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.877 [2024-12-06 07:06:05.253787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.877 [2024-12-06 07:06:05.253862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.877 [2024-12-06 07:06:05.253881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:32.877 [2024-12-06 07:06:05.253898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.877 [2024-12-06 07:06:05.253910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.877 [2024-12-06 07:06:05.253966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.877 [2024-12-06 07:06:05.253982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:32.877 [2024-12-06 07:06:05.254002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.877 [2024-12-06 07:06:05.254014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.877 [2024-12-06 07:06:05.254089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:32.877 [2024-12-06 07:06:05.254107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:32.877 [2024-12-06 07:06:05.254155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:32.877 [2024-12-06 07:06:05.254182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:32.877 [2024-12-06 07:06:05.254350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 286.215 ms, result 0 00:39:33.443 07:06:05 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:33.701 [2024-12-06 07:06:06.085383] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:39:33.701 [2024-12-06 07:06:06.085568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78536 ] 00:39:33.701 [2024-12-06 07:06:06.262487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.960 [2024-12-06 07:06:06.346409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.220 [2024-12-06 07:06:06.608478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:34.220 [2024-12-06 07:06:06.608581] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:34.220 [2024-12-06 07:06:06.764370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.220 [2024-12-06 07:06:06.764415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:34.220 [2024-12-06 07:06:06.764449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:34.220 [2024-12-06 07:06:06.764459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.220 [2024-12-06 07:06:06.767257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.220 [2024-12-06 07:06:06.767295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:34.220 [2024-12-06 07:06:06.767310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:39:34.220 [2024-12-06 07:06:06.767319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.220 [2024-12-06 07:06:06.767439] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:34.220 [2024-12-06 07:06:06.768327] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:34.220 [2024-12-06 07:06:06.768380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.220 [2024-12-06 07:06:06.768409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:34.220 [2024-12-06 07:06:06.768420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:39:34.220 [2024-12-06 07:06:06.768430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.220 [2024-12-06 07:06:06.769707] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:34.220 [2024-12-06 07:06:06.782536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.220 [2024-12-06 07:06:06.782575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:34.220 [2024-12-06 07:06:06.782607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.830 ms 00:39:34.220 [2024-12-06 07:06:06.782617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.220 [2024-12-06 07:06:06.782765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.220 [2024-12-06 07:06:06.782785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:34.220 [2024-12-06 07:06:06.782797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:39:34.220 [2024-12-06 
07:06:06.782823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.220 [2024-12-06 07:06:06.786983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.220 [2024-12-06 07:06:06.787018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:34.220 [2024-12-06 07:06:06.787031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.094 ms 00:39:34.220 [2024-12-06 07:06:06.787040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.220 [2024-12-06 07:06:06.787145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.220 [2024-12-06 07:06:06.787164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:34.220 [2024-12-06 07:06:06.787174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:39:34.220 [2024-12-06 07:06:06.787183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.220 [2024-12-06 07:06:06.787217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.220 [2024-12-06 07:06:06.787230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:34.221 [2024-12-06 07:06:06.787240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:34.221 [2024-12-06 07:06:06.787249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.221 [2024-12-06 07:06:06.787274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:34.221 [2024-12-06 07:06:06.790790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.221 [2024-12-06 07:06:06.790821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:34.221 [2024-12-06 07:06:06.790834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:39:34.221 [2024-12-06 07:06:06.790843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.221 [2024-12-06 07:06:06.790885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.221 [2024-12-06 07:06:06.790900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:34.221 [2024-12-06 07:06:06.790910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:34.221 [2024-12-06 07:06:06.790918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.221 [2024-12-06 07:06:06.790943] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:34.221 [2024-12-06 07:06:06.790965] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:34.221 [2024-12-06 07:06:06.790999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:34.221 [2024-12-06 07:06:06.791015] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:34.221 [2024-12-06 07:06:06.791101] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:34.221 [2024-12-06 07:06:06.791113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:34.221 [2024-12-06 07:06:06.791125] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:39:34.221 [2024-12-06 07:06:06.791141] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791151] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791160] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:34.221 [2024-12-06 07:06:06.791169] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:34.221 [2024-12-06 07:06:06.791177] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:34.221 [2024-12-06 07:06:06.791185] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:34.221 [2024-12-06 07:06:06.791195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.221 [2024-12-06 07:06:06.791203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:34.221 [2024-12-06 07:06:06.791213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:39:34.221 [2024-12-06 07:06:06.791221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.221 [2024-12-06 07:06:06.791314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.221 [2024-12-06 07:06:06.791333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:34.221 [2024-12-06 07:06:06.791343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:39:34.221 [2024-12-06 07:06:06.791352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.221 [2024-12-06 07:06:06.791445] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:34.221 [2024-12-06 07:06:06.791460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:34.221 [2024-12-06 07:06:06.791470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:34.221 [2024-12-06 07:06:06.791496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:34.221 [2024-12-06 07:06:06.791522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:34.221 [2024-12-06 07:06:06.791538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:34.221 [2024-12-06 07:06:06.791557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:34.221 [2024-12-06 07:06:06.791566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:34.221 [2024-12-06 07:06:06.791574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:34.221 [2024-12-06 07:06:06.791583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:34.221 [2024-12-06 07:06:06.791593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:39:34.221 [2024-12-06 07:06:06.791609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:34.221 [2024-12-06 07:06:06.791633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:34.221 [2024-12-06 07:06:06.791657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:34.221 [2024-12-06 07:06:06.791682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:34.221 [2024-12-06 07:06:06.791755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:34.221 [2024-12-06 07:06:06.791786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:34.221 [2024-12-06 07:06:06.791803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:34.221 [2024-12-06 07:06:06.791812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:34.221 [2024-12-06 07:06:06.791821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:34.221 [2024-12-06 07:06:06.791829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:34.221 [2024-12-06 07:06:06.791838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:34.221 [2024-12-06 07:06:06.791847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:34.221 [2024-12-06 07:06:06.791864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:34.221 [2024-12-06 07:06:06.791876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791884] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:34.221 [2024-12-06 07:06:06.791894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:34.221 [2024-12-06 07:06:06.791908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:34.221 [2024-12-06 07:06:06.791928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:34.221 [2024-12-06 07:06:06.791937] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:34.221 [2024-12-06 07:06:06.791946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:34.221 [2024-12-06 07:06:06.791955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:34.221 [2024-12-06 07:06:06.791963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:34.221 [2024-12-06 07:06:06.791972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:34.221 [2024-12-06 07:06:06.791983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:34.221 [2024-12-06 07:06:06.791995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:34.221 [2024-12-06 07:06:06.792005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:34.221 [2024-12-06 07:06:06.792014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:34.221 [2024-12-06 07:06:06.792024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:34.221 [2024-12-06 07:06:06.792033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:34.221 [2024-12-06 07:06:06.792043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:34.221 [2024-12-06 07:06:06.792052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:34.221 [2024-12-06 07:06:06.792077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:34.221 [2024-12-06 07:06:06.792087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:34.221 [2024-12-06 07:06:06.792110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:34.221 [2024-12-06 07:06:06.792120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:34.221 [2024-12-06 07:06:06.792129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:34.221 [2024-12-06 07:06:06.792138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:34.222 [2024-12-06 07:06:06.792147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:34.222 [2024-12-06 07:06:06.792157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:34.222 [2024-12-06 07:06:06.792167] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:34.222 [2024-12-06 07:06:06.792177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:34.222 [2024-12-06 07:06:06.792187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:34.222 [2024-12-06 07:06:06.792212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:34.222 [2024-12-06 07:06:06.792221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:34.222 [2024-12-06 07:06:06.792231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:34.222 [2024-12-06 07:06:06.792243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.222 [2024-12-06 07:06:06.792284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:34.222 [2024-12-06 07:06:06.792295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:39:34.222 [2024-12-06 07:06:06.792305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.819902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.820170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:34.482 [2024-12-06 07:06:06.820198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.524 ms 00:39:34.482 [2024-12-06 07:06:06.820209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.820410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.820430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:34.482 [2024-12-06 07:06:06.820442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:39:34.482 [2024-12-06 07:06:06.820452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.874017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.874206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:34.482 [2024-12-06 07:06:06.874241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.521 ms 00:39:34.482 [2024-12-06 07:06:06.874253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.874384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.874414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:34.482 [2024-12-06 07:06:06.874426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:34.482 [2024-12-06 07:06:06.874436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.874833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.874880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:34.482 [2024-12-06 07:06:06.874899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:39:34.482 [2024-12-06 07:06:06.874909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.875047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.875064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:34.482 [2024-12-06 07:06:06.875074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:39:34.482 [2024-12-06 07:06:06.875099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.888838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.888874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:34.482 [2024-12-06 07:06:06.888889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.683 ms 00:39:34.482 [2024-12-06 07:06:06.888898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.901771] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:34.482 [2024-12-06 07:06:06.901809] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:34.482 [2024-12-06 07:06:06.901824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.901835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:34.482 [2024-12-06 07:06:06.901845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.824 ms 00:39:34.482 [2024-12-06 07:06:06.901854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.924930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.924967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:34.482 [2024-12-06 07:06:06.924981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.996 ms 00:39:34.482 [2024-12-06 07:06:06.924991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.937485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.937520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:34.482 [2024-12-06 07:06:06.937534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.413 ms 00:39:34.482 [2024-12-06 07:06:06.937544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.949675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.949896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:34.482 [2024-12-06 07:06:06.949922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.055 ms 00:39:34.482 [2024-12-06 07:06:06.949933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:06.950657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:06.950684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:34.482 [2024-12-06 07:06:06.950697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:39:34.482 [2024-12-06 07:06:06.950734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:07.007629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 
07:06:07.007697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:34.482 [2024-12-06 07:06:07.007752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.862 ms 00:39:34.482 [2024-12-06 07:06:07.007764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.482 [2024-12-06 07:06:07.017734] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:34.482 [2024-12-06 07:06:07.028948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.482 [2024-12-06 07:06:07.029000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:34.483 [2024-12-06 07:06:07.029017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.018 ms 00:39:34.483 [2024-12-06 07:06:07.029033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.483 [2024-12-06 07:06:07.029144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.483 [2024-12-06 07:06:07.029163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:34.483 [2024-12-06 07:06:07.029174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:34.483 [2024-12-06 07:06:07.029183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.483 [2024-12-06 07:06:07.029241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.483 [2024-12-06 07:06:07.029256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:34.483 [2024-12-06 07:06:07.029266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:39:34.483 [2024-12-06 07:06:07.029280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.483 [2024-12-06 07:06:07.029314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.483 [2024-12-06 07:06:07.029329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:34.483 [2024-12-06 07:06:07.029339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:34.483 [2024-12-06 07:06:07.029347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.483 [2024-12-06 07:06:07.029386] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:34.483 [2024-12-06 07:06:07.029402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.483 [2024-12-06 07:06:07.029411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:34.483 [2024-12-06 07:06:07.029420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:39:34.483 [2024-12-06 07:06:07.029429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.483 [2024-12-06 07:06:07.054148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.483 [2024-12-06 07:06:07.054188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:34.483 [2024-12-06 07:06:07.054204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.697 ms 00:39:34.483 [2024-12-06 07:06:07.054213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.483 [2024-12-06 07:06:07.054305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.483 [2024-12-06 07:06:07.054323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:34.483 [2024-12-06 
07:06:07.054333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:39:34.483 [2024-12-06 07:06:07.054342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.483 [2024-12-06 07:06:07.055498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:34.483 [2024-12-06 07:06:07.058997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 290.738 ms, result 0 00:39:34.483 [2024-12-06 07:06:07.059904] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:34.742 [2024-12-06 07:06:07.074565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:35.678  [2024-12-06T07:06:09.234Z] Copying: 24/256 [MB] (24 MBps) [2024-12-06T07:06:10.172Z] Copying: 45/256 [MB] (20 MBps) [2024-12-06T07:06:11.549Z] Copying: 66/256 [MB] (21 MBps) [2024-12-06T07:06:12.486Z] Copying: 88/256 [MB] (21 MBps) [2024-12-06T07:06:13.423Z] Copying: 110/256 [MB] (21 MBps) [2024-12-06T07:06:14.356Z] Copying: 131/256 [MB] (21 MBps) [2024-12-06T07:06:15.289Z] Copying: 153/256 [MB] (21 MBps) [2024-12-06T07:06:16.223Z] Copying: 174/256 [MB] (21 MBps) [2024-12-06T07:06:17.177Z] Copying: 196/256 [MB] (22 MBps) [2024-12-06T07:06:18.555Z] Copying: 218/256 [MB] (22 MBps) [2024-12-06T07:06:19.122Z] Copying: 240/256 [MB] (21 MBps) [2024-12-06T07:06:19.122Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-06 07:06:19.056140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:46.531 [2024-12-06 07:06:19.072701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.531 [2024-12-06 07:06:19.072769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:46.531 [2024-12-06 07:06:19.072810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:46.531 [2024-12-06 07:06:19.072821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.531 [2024-12-06 07:06:19.072852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:46.531 [2024-12-06 07:06:19.075594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.531 [2024-12-06 07:06:19.075624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:46.531 [2024-12-06 07:06:19.075653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:39:46.531 [2024-12-06 07:06:19.075662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.531 [2024-12-06 07:06:19.075983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.531 [2024-12-06 07:06:19.076003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:46.531 [2024-12-06 07:06:19.076014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:39:46.531 [2024-12-06 07:06:19.076024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.531 [2024-12-06 07:06:19.079261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.531 [2024-12-06 07:06:19.079291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:46.531 [2024-12-06 07:06:19.079304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:39:46.531 [2024-12-06 
07:06:19.079313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.532 [2024-12-06 07:06:19.085346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.532 [2024-12-06 07:06:19.085374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:46.532 [2024-12-06 07:06:19.085402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.011 ms 00:39:46.532 [2024-12-06 07:06:19.085411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.532 [2024-12-06 07:06:19.110632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.532 [2024-12-06 07:06:19.110669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:46.532 [2024-12-06 07:06:19.110699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.160 ms 00:39:46.532 [2024-12-06 07:06:19.110709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.791 [2024-12-06 07:06:19.126646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.791 [2024-12-06 07:06:19.126683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:46.791 [2024-12-06 07:06:19.126752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.870 ms 00:39:46.791 [2024-12-06 07:06:19.126766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.791 [2024-12-06 07:06:19.126940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.791 [2024-12-06 07:06:19.126960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:46.791 [2024-12-06 07:06:19.126984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:39:46.791 [2024-12-06 07:06:19.126994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.791 [2024-12-06 07:06:19.156425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.791 [2024-12-06 07:06:19.156464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:46.792 [2024-12-06 07:06:19.156496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.409 ms 00:39:46.792 [2024-12-06 07:06:19.156508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.792 [2024-12-06 07:06:19.184691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.792 [2024-12-06 07:06:19.184763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:46.792 [2024-12-06 07:06:19.184794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.072 ms 00:39:46.792 [2024-12-06 07:06:19.184804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.792 [2024-12-06 07:06:19.209461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.792 [2024-12-06 07:06:19.209497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:46.792 [2024-12-06 07:06:19.209527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.599 ms 00:39:46.792 [2024-12-06 07:06:19.209536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.792 [2024-12-06 07:06:19.234234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.792 [2024-12-06 07:06:19.234269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:46.792 [2024-12-06 07:06:19.234282] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.632 ms 00:39:46.792 [2024-12-06 07:06:19.234291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.792 [2024-12-06 07:06:19.234328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:46.792 [2024-12-06 07:06:19.234347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:39:46.792 [2024-12-06 07:06:19.234556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.234998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:46.792 [2024-12-06 07:06:19.235156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235410] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:46.793 [2024-12-06 07:06:19.235450] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:46.793 [2024-12-06 07:06:19.235460] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a47c3814-54f8-4eb2-8588-0c95ee6f413a 00:39:46.793 [2024-12-06 07:06:19.235469] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:46.793 [2024-12-06 07:06:19.235479] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:46.793 [2024-12-06 07:06:19.235488] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:46.793 [2024-12-06 07:06:19.235498] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:46.793 [2024-12-06 07:06:19.235507] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:46.793 [2024-12-06 07:06:19.235517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:46.793 [2024-12-06 07:06:19.235532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:46.793 [2024-12-06 07:06:19.235541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:46.793 [2024-12-06 07:06:19.235550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:46.793 [2024-12-06 07:06:19.235560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.793 [2024-12-06 07:06:19.235570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:46.793 [2024-12-06 07:06:19.235580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:39:46.793 [2024-12-06 07:06:19.235589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.793 [2024-12-06 07:06:19.249361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.793 [2024-12-06 07:06:19.249392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:46.793 [2024-12-06 07:06:19.249405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.732 ms 00:39:46.793 [2024-12-06 07:06:19.249414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.793 [2024-12-06 07:06:19.249813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.793 [2024-12-06 07:06:19.249845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:46.793 [2024-12-06 07:06:19.249856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:39:46.793 [2024-12-06 07:06:19.249865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.793 [2024-12-06 07:06:19.285161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.793 [2024-12-06 07:06:19.285199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:46.793 [2024-12-06 07:06:19.285213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.793 [2024-12-06 07:06:19.285227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.793 [2024-12-06 07:06:19.285322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:39:46.793 [2024-12-06 07:06:19.285338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:46.793 [2024-12-06 07:06:19.285347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.793 [2024-12-06 07:06:19.285356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.793 [2024-12-06 07:06:19.285404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.793 [2024-12-06 07:06:19.285419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:46.793 [2024-12-06 07:06:19.285428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.793 [2024-12-06 07:06:19.285437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.793 [2024-12-06 07:06:19.285462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.793 [2024-12-06 07:06:19.285472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:46.793 [2024-12-06 07:06:19.285481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.793 [2024-12-06 07:06:19.285489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.793 [2024-12-06 07:06:19.363432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.793 [2024-12-06 07:06:19.363489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:46.793 [2024-12-06 07:06:19.363505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.793 [2024-12-06 07:06:19.363515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.052 [2024-12-06 07:06:19.429549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:47.052 [2024-12-06 07:06:19.429595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:47.052 [2024-12-06 07:06:19.429609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:47.052 [2024-12-06 07:06:19.429618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.052 [2024-12-06 07:06:19.429700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:47.052 [2024-12-06 07:06:19.429768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:47.052 [2024-12-06 07:06:19.429779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:47.052 [2024-12-06 07:06:19.429789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.052 [2024-12-06 07:06:19.429821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:47.052 [2024-12-06 07:06:19.429853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:47.052 [2024-12-06 07:06:19.429863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:47.052 [2024-12-06 07:06:19.429873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.052 [2024-12-06 07:06:19.429983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:47.052 [2024-12-06 07:06:19.430001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:47.052 [2024-12-06 07:06:19.430011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:47.052 [2024-12-06 07:06:19.430021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.052 [2024-12-06 
07:06:19.430067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:47.052 [2024-12-06 07:06:19.430081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:47.052 [2024-12-06 07:06:19.430112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:47.052 [2024-12-06 07:06:19.430136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.052 [2024-12-06 07:06:19.430207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:47.052 [2024-12-06 07:06:19.430220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:47.052 [2024-12-06 07:06:19.430230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:47.052 [2024-12-06 07:06:19.430240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.052 [2024-12-06 07:06:19.430284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:47.052 [2024-12-06 07:06:19.430304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:47.052 [2024-12-06 07:06:19.430315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:47.052 [2024-12-06 07:06:19.430324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.052 [2024-12-06 07:06:19.430468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 357.779 ms, result 0 00:39:47.620 00:39:47.620 00:39:47.620 07:06:20 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:48.186 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:39:48.186 07:06:20 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:39:48.186 07:06:20 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:39:48.186 07:06:20 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:48.186 07:06:20 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:48.186 07:06:20 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:39:48.186 07:06:20 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:39:48.186 07:06:20 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78483 00:39:48.186 07:06:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78483 ']' 00:39:48.186 07:06:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78483 00:39:48.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78483) - No such process 00:39:48.186 Process with pid 78483 is not found 00:39:48.186 07:06:20 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78483 is not found' 00:39:48.186 00:39:48.186 real 1m8.845s 00:39:48.186 user 1m31.857s 00:39:48.186 sys 0m9.217s 00:39:48.186 ************************************ 00:39:48.186 END TEST ftl_trim 00:39:48.186 ************************************ 00:39:48.186 07:06:20 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:48.186 07:06:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:39:48.186 07:06:20 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:39:48.186 07:06:20 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:48.186 07:06:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:39:48.186 07:06:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:48.444 ************************************ 00:39:48.444 START TEST ftl_restore 00:39:48.444 ************************************ 00:39:48.444 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:39:48.444 * Looking for test storage... 00:39:48.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:39:48.444 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:48.444 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:39:48.444 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:48.444 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:48.444 07:06:20 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:39:48.444 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:48.444 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:48.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.444 --rc genhtml_branch_coverage=1 00:39:48.444 --rc genhtml_function_coverage=1 00:39:48.444 --rc genhtml_legend=1 00:39:48.444 --rc geninfo_all_blocks=1 00:39:48.444 --rc geninfo_unexecuted_blocks=1 00:39:48.444 00:39:48.444 ' 00:39:48.444 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:48.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.444 --rc genhtml_branch_coverage=1 00:39:48.444 --rc genhtml_function_coverage=1 00:39:48.445 --rc genhtml_legend=1 00:39:48.445 --rc geninfo_all_blocks=1 00:39:48.445 --rc geninfo_unexecuted_blocks=1 00:39:48.445 00:39:48.445 ' 00:39:48.445 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:48.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.445 --rc genhtml_branch_coverage=1 00:39:48.445 --rc genhtml_function_coverage=1 00:39:48.445 --rc genhtml_legend=1 00:39:48.445 --rc geninfo_all_blocks=1 00:39:48.445 --rc geninfo_unexecuted_blocks=1 00:39:48.445 00:39:48.445 ' 00:39:48.445 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:48.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.445 --rc genhtml_branch_coverage=1 00:39:48.445 --rc genhtml_function_coverage=1 00:39:48.445 --rc genhtml_legend=1 00:39:48.445 --rc geninfo_all_blocks=1 00:39:48.445 --rc geninfo_unexecuted_blocks=1 00:39:48.445 00:39:48.445 ' 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.bdMFKYb01v 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:39:48.445 
07:06:20 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78746 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78746 00:39:48.445 07:06:20 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:48.445 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78746 ']' 00:39:48.445 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.445 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.445 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.445 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.445 07:06:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:39:48.702 [2024-12-06 07:06:21.123161] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:39:48.703 [2024-12-06 07:06:21.123587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78746 ] 00:39:48.960 [2024-12-06 07:06:21.304307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.960 [2024-12-06 07:06:21.384523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.526 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.526 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:39:49.526 07:06:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:39:49.526 07:06:22 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:39:49.526 07:06:22 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:39:49.526 07:06:22 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:39:49.526 07:06:22 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:39:49.526 07:06:22 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:50.092 07:06:22 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:39:50.092 07:06:22 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:39:50.092 07:06:22 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:39:50.092 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:39:50.092 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:39:50.092 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:39:50.092 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:39:50.092 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:39:50.351 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:39:50.351 { 00:39:50.351 "name": "nvme0n1", 00:39:50.351 "aliases": [ 00:39:50.351 "2bdc34a2-0d79-4798-9664-c345e1992d42" 00:39:50.351 ], 00:39:50.351 "product_name": "NVMe disk", 00:39:50.351 "block_size": 4096, 00:39:50.351 "num_blocks": 1310720, 00:39:50.351 "uuid": 
"2bdc34a2-0d79-4798-9664-c345e1992d42", 00:39:50.351 "numa_id": -1, 00:39:50.351 "assigned_rate_limits": { 00:39:50.351 "rw_ios_per_sec": 0, 00:39:50.351 "rw_mbytes_per_sec": 0, 00:39:50.351 "r_mbytes_per_sec": 0, 00:39:50.351 "w_mbytes_per_sec": 0 00:39:50.351 }, 00:39:50.351 "claimed": true, 00:39:50.351 "claim_type": "read_many_write_one", 00:39:50.351 "zoned": false, 00:39:50.351 "supported_io_types": { 00:39:50.351 "read": true, 00:39:50.351 "write": true, 00:39:50.351 "unmap": true, 00:39:50.351 "flush": true, 00:39:50.351 "reset": true, 00:39:50.351 "nvme_admin": true, 00:39:50.351 "nvme_io": true, 00:39:50.351 "nvme_io_md": false, 00:39:50.351 "write_zeroes": true, 00:39:50.351 "zcopy": false, 00:39:50.351 "get_zone_info": false, 00:39:50.351 "zone_management": false, 00:39:50.351 "zone_append": false, 00:39:50.351 "compare": true, 00:39:50.351 "compare_and_write": false, 00:39:50.351 "abort": true, 00:39:50.351 "seek_hole": false, 00:39:50.351 "seek_data": false, 00:39:50.351 "copy": true, 00:39:50.351 "nvme_iov_md": false 00:39:50.351 }, 00:39:50.351 "driver_specific": { 00:39:50.351 "nvme": [ 00:39:50.351 { 00:39:50.351 "pci_address": "0000:00:11.0", 00:39:50.351 "trid": { 00:39:50.351 "trtype": "PCIe", 00:39:50.351 "traddr": "0000:00:11.0" 00:39:50.351 }, 00:39:50.351 "ctrlr_data": { 00:39:50.351 "cntlid": 0, 00:39:50.351 "vendor_id": "0x1b36", 00:39:50.351 "model_number": "QEMU NVMe Ctrl", 00:39:50.351 "serial_number": "12341", 00:39:50.351 "firmware_revision": "8.0.0", 00:39:50.351 "subnqn": "nqn.2019-08.org.qemu:12341", 00:39:50.351 "oacs": { 00:39:50.351 "security": 0, 00:39:50.351 "format": 1, 00:39:50.351 "firmware": 0, 00:39:50.351 "ns_manage": 1 00:39:50.351 }, 00:39:50.351 "multi_ctrlr": false, 00:39:50.351 "ana_reporting": false 00:39:50.351 }, 00:39:50.351 "vs": { 00:39:50.351 "nvme_version": "1.4" 00:39:50.351 }, 00:39:50.351 "ns_data": { 00:39:50.351 "id": 1, 00:39:50.351 "can_share": false 00:39:50.351 } 00:39:50.351 } 00:39:50.351 ], 00:39:50.351 "mp_policy": "active_passive" 00:39:50.351 } 00:39:50.351 } 00:39:50.351 ]' 00:39:50.351 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:39:50.351 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:39:50.351 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:39:50.351 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:39:50.351 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:39:50.351 07:06:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:39:50.351 07:06:22 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:39:50.351 07:06:22 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:39:50.351 07:06:22 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:39:50.351 07:06:22 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:50.351 07:06:22 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:50.609 07:06:23 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=57f8b668-eb7a-41a6-a72a-abc20374f8aa 00:39:50.609 07:06:23 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:39:50.609 07:06:23 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57f8b668-eb7a-41a6-a72a-abc20374f8aa 00:39:50.867 07:06:23 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:39:51.126 07:06:23 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=fa42ec0c-d4ac-48dd-8c1e-2a439c8579ef 00:39:51.126 07:06:23 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fa42ec0c-d4ac-48dd-8c1e-2a439c8579ef 00:39:51.386 07:06:23 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:51.386 07:06:23 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:39:51.386 07:06:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:51.386 07:06:23 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:39:51.386 07:06:23 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:39:51.386 07:06:23 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:51.386 07:06:23 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:39:51.386 07:06:23 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:51.386 07:06:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:51.386 07:06:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:39:51.386 07:06:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:39:51.386 07:06:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:39:51.386 07:06:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:51.644 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:39:51.644 { 00:39:51.644 "name": "4183bdf8-366a-46b9-a3fa-61e308cbdb34", 00:39:51.644 "aliases": [ 00:39:51.644 "lvs/nvme0n1p0" 00:39:51.644 ], 00:39:51.644 "product_name": "Logical Volume", 00:39:51.644 "block_size": 4096, 00:39:51.644 "num_blocks": 26476544, 00:39:51.644 "uuid": "4183bdf8-366a-46b9-a3fa-61e308cbdb34", 00:39:51.644 "assigned_rate_limits": { 00:39:51.644 "rw_ios_per_sec": 0, 00:39:51.644 "rw_mbytes_per_sec": 0, 00:39:51.644 "r_mbytes_per_sec": 0, 00:39:51.644 "w_mbytes_per_sec": 0 00:39:51.644 }, 00:39:51.644 "claimed": false, 00:39:51.644 "zoned": false, 00:39:51.644 "supported_io_types": { 00:39:51.644 "read": true, 00:39:51.644 "write": true, 00:39:51.644 "unmap": true, 00:39:51.644 "flush": false, 00:39:51.644 "reset": true, 00:39:51.644 "nvme_admin": false, 00:39:51.644 "nvme_io": false, 00:39:51.644 "nvme_io_md": false, 00:39:51.644 "write_zeroes": true, 00:39:51.644 "zcopy": false, 00:39:51.644 "get_zone_info": false, 00:39:51.644 "zone_management": false, 00:39:51.644 "zone_append": false, 00:39:51.644 "compare": false, 00:39:51.644 "compare_and_write": false, 00:39:51.644 "abort": false, 00:39:51.644 "seek_hole": true, 00:39:51.644 "seek_data": true, 00:39:51.644 "copy": false, 00:39:51.644 "nvme_iov_md": false 00:39:51.644 }, 00:39:51.644 "driver_specific": { 00:39:51.644 "lvol": { 00:39:51.644 "lvol_store_uuid": "fa42ec0c-d4ac-48dd-8c1e-2a439c8579ef", 00:39:51.644 "base_bdev": "nvme0n1", 00:39:51.644 "thin_provision": true, 00:39:51.644 "num_allocated_clusters": 0, 00:39:51.644 "snapshot": false, 00:39:51.644 "clone": false, 00:39:51.644 "esnap_clone": false 00:39:51.644 } 00:39:51.644 } 00:39:51.644 } 00:39:51.644 ]' 00:39:51.644 07:06:24 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:39:51.644 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:39:51.644 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:39:51.644 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:39:51.644 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:39:51.644 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:39:51.644 07:06:24 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:39:51.644 07:06:24 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:39:51.644 07:06:24 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:39:51.902 07:06:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:39:51.902 07:06:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:39:51.902 07:06:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:51.902 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:51.903 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:39:51.903 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:39:51.903 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:39:51.903 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:52.162 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:39:52.162 { 00:39:52.162 "name": "4183bdf8-366a-46b9-a3fa-61e308cbdb34", 00:39:52.162 "aliases": [ 00:39:52.162 "lvs/nvme0n1p0" 00:39:52.162 ], 00:39:52.162 "product_name": "Logical Volume", 00:39:52.162 "block_size": 4096, 00:39:52.162 "num_blocks": 26476544, 00:39:52.162 "uuid": "4183bdf8-366a-46b9-a3fa-61e308cbdb34", 00:39:52.162 "assigned_rate_limits": { 00:39:52.162 "rw_ios_per_sec": 0, 00:39:52.162 "rw_mbytes_per_sec": 0, 00:39:52.162 "r_mbytes_per_sec": 0, 00:39:52.162 "w_mbytes_per_sec": 0 00:39:52.162 }, 00:39:52.162 "claimed": false, 00:39:52.162 "zoned": false, 00:39:52.162 "supported_io_types": { 00:39:52.162 "read": true, 00:39:52.162 "write": true, 00:39:52.162 "unmap": true, 00:39:52.162 "flush": false, 00:39:52.162 "reset": true, 00:39:52.162 "nvme_admin": false, 00:39:52.162 "nvme_io": false, 00:39:52.162 "nvme_io_md": false, 00:39:52.162 "write_zeroes": true, 00:39:52.162 "zcopy": false, 00:39:52.162 "get_zone_info": false, 00:39:52.162 "zone_management": false, 00:39:52.162 "zone_append": false, 00:39:52.162 "compare": false, 00:39:52.162 "compare_and_write": false, 00:39:52.162 "abort": false, 00:39:52.162 "seek_hole": true, 00:39:52.162 "seek_data": true, 00:39:52.162 "copy": false, 00:39:52.162 "nvme_iov_md": false 00:39:52.162 }, 00:39:52.162 "driver_specific": { 00:39:52.162 "lvol": { 00:39:52.162 "lvol_store_uuid": "fa42ec0c-d4ac-48dd-8c1e-2a439c8579ef", 00:39:52.162 "base_bdev": "nvme0n1", 00:39:52.162 "thin_provision": true, 00:39:52.162 "num_allocated_clusters": 0, 00:39:52.162 "snapshot": false, 00:39:52.162 "clone": false, 00:39:52.162 "esnap_clone": false 00:39:52.162 } 00:39:52.162 } 00:39:52.162 } 00:39:52.162 ]' 00:39:52.162 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:39:52.162 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:39:52.162 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:39:52.162 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:39:52.162 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:39:52.162 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:39:52.162 07:06:24 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:39:52.162 07:06:24 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:39:52.421 07:06:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:39:52.421 07:06:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:52.421 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:52.421 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:39:52.421 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:39:52.421 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:39:52.421 07:06:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4183bdf8-366a-46b9-a3fa-61e308cbdb34 00:39:52.679 07:06:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:39:52.679 { 00:39:52.679 "name": "4183bdf8-366a-46b9-a3fa-61e308cbdb34", 00:39:52.679 "aliases": [ 00:39:52.679 "lvs/nvme0n1p0" 00:39:52.679 ], 00:39:52.679 "product_name": "Logical Volume", 00:39:52.679 "block_size": 4096, 00:39:52.679 "num_blocks": 26476544, 00:39:52.679 "uuid": "4183bdf8-366a-46b9-a3fa-61e308cbdb34", 00:39:52.679 "assigned_rate_limits": { 00:39:52.679 "rw_ios_per_sec": 0, 00:39:52.679 "rw_mbytes_per_sec": 0, 00:39:52.679 "r_mbytes_per_sec": 0, 00:39:52.679 "w_mbytes_per_sec": 0 00:39:52.679 }, 00:39:52.679 "claimed": false, 00:39:52.679 "zoned": false, 00:39:52.679 "supported_io_types": { 00:39:52.679 "read": true, 00:39:52.679 "write": true, 00:39:52.679 "unmap": true, 00:39:52.679 "flush": false, 00:39:52.679 "reset": true, 00:39:52.679 "nvme_admin": false, 00:39:52.679 "nvme_io": false, 00:39:52.679 "nvme_io_md": false, 00:39:52.679 "write_zeroes": true, 00:39:52.679 "zcopy": false, 00:39:52.679 "get_zone_info": false, 00:39:52.679 "zone_management": false, 00:39:52.679 "zone_append": false, 00:39:52.679 "compare": false, 00:39:52.679 "compare_and_write": false, 00:39:52.679 "abort": false, 00:39:52.679 "seek_hole": true, 00:39:52.679 "seek_data": true, 00:39:52.679 "copy": false, 00:39:52.679 "nvme_iov_md": false 00:39:52.679 }, 00:39:52.679 "driver_specific": { 00:39:52.679 "lvol": { 00:39:52.679 "lvol_store_uuid": "fa42ec0c-d4ac-48dd-8c1e-2a439c8579ef", 00:39:52.679 "base_bdev": "nvme0n1", 00:39:52.679 "thin_provision": true, 00:39:52.680 "num_allocated_clusters": 0, 00:39:52.680 "snapshot": false, 00:39:52.680 "clone": false, 00:39:52.680 "esnap_clone": false 00:39:52.680 } 00:39:52.680 } 00:39:52.680 } 00:39:52.680 ]' 00:39:52.680 07:06:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:39:52.680 07:06:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:39:52.680 07:06:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:39:52.939 07:06:25 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:39:52.939 07:06:25 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:39:52.939 07:06:25 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:39:52.939 07:06:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:39:52.939 07:06:25 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4183bdf8-366a-46b9-a3fa-61e308cbdb34 --l2p_dram_limit 10' 00:39:52.939 07:06:25 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:39:52.939 07:06:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:39:52.939 07:06:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:39:52.939 07:06:25 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:39:52.939 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:39:52.939 07:06:25 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4183bdf8-366a-46b9-a3fa-61e308cbdb34 --l2p_dram_limit 10 -c nvc0n1p0 00:39:52.939 [2024-12-06 07:06:25.503881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.939 [2024-12-06 07:06:25.503932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:52.940 [2024-12-06 07:06:25.503969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:52.940 [2024-12-06 07:06:25.503980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.504080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.504115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:52.940 [2024-12-06 07:06:25.504129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:39:52.940 [2024-12-06 07:06:25.504138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.504177] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:52.940 [2024-12-06 07:06:25.505073] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:52.940 [2024-12-06 07:06:25.505134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.505147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:52.940 [2024-12-06 07:06:25.505160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:39:52.940 [2024-12-06 07:06:25.505172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.505337] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dbb34d5e-3007-41d5-8e2b-9cd66c8b840b 00:39:52.940 [2024-12-06 07:06:25.506227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.506261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:39:52.940 [2024-12-06 07:06:25.506275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:39:52.940 [2024-12-06 07:06:25.506286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.510084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 
07:06:25.510128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:52.940 [2024-12-06 07:06:25.510142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:39:52.940 [2024-12-06 07:06:25.510153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.510248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.510268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:52.940 [2024-12-06 07:06:25.510280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:39:52.940 [2024-12-06 07:06:25.510294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.510345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.510364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:52.940 [2024-12-06 07:06:25.510378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:52.940 [2024-12-06 07:06:25.510390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.510416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:52.940 [2024-12-06 07:06:25.514183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.514218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:52.940 [2024-12-06 07:06:25.514236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.772 ms 00:39:52.940 [2024-12-06 07:06:25.514247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.514288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.514302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:52.940 [2024-12-06 07:06:25.514314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:52.940 [2024-12-06 07:06:25.514324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.514376] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:39:52.940 [2024-12-06 07:06:25.514506] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:52.940 [2024-12-06 07:06:25.514526] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:52.940 [2024-12-06 07:06:25.514539] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:52.940 [2024-12-06 07:06:25.514553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:52.940 [2024-12-06 07:06:25.514565] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:52.940 [2024-12-06 07:06:25.514577] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:52.940 [2024-12-06 07:06:25.514587] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:52.940 [2024-12-06 07:06:25.514602] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:52.940 [2024-12-06 07:06:25.514612] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:52.940 [2024-12-06 07:06:25.514623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.514642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:52.940 [2024-12-06 07:06:25.514655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:39:52.940 [2024-12-06 07:06:25.514665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.514844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.940 [2024-12-06 07:06:25.514864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:52.940 [2024-12-06 07:06:25.514877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:39:52.940 [2024-12-06 07:06:25.514888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.940 [2024-12-06 07:06:25.514992] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:52.940 [2024-12-06 07:06:25.515010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:52.940 [2024-12-06 07:06:25.515024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:52.940 [2024-12-06 07:06:25.515035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:52.940 [2024-12-06 07:06:25.515046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:52.940 [2024-12-06 07:06:25.515056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:52.940 [2024-12-06 07:06:25.515067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:52.940 [2024-12-06 07:06:25.515076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:52.940 [2024-12-06 07:06:25.515088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:52.940 [2024-12-06 07:06:25.515097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:52.940 [2024-12-06 07:06:25.515110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:52.940 [2024-12-06 07:06:25.515120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:52.940 [2024-12-06 07:06:25.515131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:52.940 [2024-12-06 07:06:25.515141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:52.940 [2024-12-06 07:06:25.515152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:52.940 [2024-12-06 07:06:25.515161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:52.940 [2024-12-06 07:06:25.515174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:52.940 [2024-12-06 07:06:25.515184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:52.940 [2024-12-06 07:06:25.515194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:52.940 [2024-12-06 07:06:25.515204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:52.940 [2024-12-06 07:06:25.515215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:52.940 [2024-12-06 07:06:25.515224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:52.940 [2024-12-06 07:06:25.515234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:52.940 
[2024-12-06 07:06:25.515244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:52.940 [2024-12-06 07:06:25.515254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:52.940 [2024-12-06 07:06:25.515263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:52.940 [2024-12-06 07:06:25.515274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:52.941 [2024-12-06 07:06:25.515283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:52.941 [2024-12-06 07:06:25.515309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:52.941 [2024-12-06 07:06:25.515318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:52.941 [2024-12-06 07:06:25.515328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:52.941 [2024-12-06 07:06:25.515337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:52.941 [2024-12-06 07:06:25.515349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:52.941 [2024-12-06 07:06:25.515358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:52.941 [2024-12-06 07:06:25.515368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:52.941 [2024-12-06 07:06:25.515377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:52.941 [2024-12-06 07:06:25.515388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:52.941 [2024-12-06 07:06:25.515397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:52.941 [2024-12-06 07:06:25.515408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:52.941 [2024-12-06 07:06:25.515416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:52.941 [2024-12-06 07:06:25.515426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:52.941 [2024-12-06 07:06:25.515435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:52.941 [2024-12-06 07:06:25.515446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:52.941 [2024-12-06 07:06:25.515454] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:52.941 [2024-12-06 07:06:25.515465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:52.941 [2024-12-06 07:06:25.515475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:52.941 [2024-12-06 07:06:25.515487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:52.941 [2024-12-06 07:06:25.515498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:52.941 [2024-12-06 07:06:25.515511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:52.941 [2024-12-06 07:06:25.515520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:52.941 [2024-12-06 07:06:25.515531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:52.941 [2024-12-06 07:06:25.515540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:52.941 [2024-12-06 07:06:25.515550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:52.941 [2024-12-06 07:06:25.515561] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:52.941 [2024-12-06 
07:06:25.515577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:52.941 [2024-12-06 07:06:25.515587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:52.941 [2024-12-06 07:06:25.515598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:52.941 [2024-12-06 07:06:25.515607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:52.941 [2024-12-06 07:06:25.515618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:52.941 [2024-12-06 07:06:25.515628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:52.941 [2024-12-06 07:06:25.515638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:52.941 [2024-12-06 07:06:25.515647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:52.941 [2024-12-06 07:06:25.515660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:52.941 [2024-12-06 07:06:25.515669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:52.941 [2024-12-06 07:06:25.515681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:52.941 [2024-12-06 07:06:25.515690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:52.941 [2024-12-06 07:06:25.515701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:52.941 [2024-12-06 07:06:25.515710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:52.941 [2024-12-06 07:06:25.515737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:52.941 [2024-12-06 07:06:25.515747] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:52.941 [2024-12-06 07:06:25.515799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:52.941 [2024-12-06 07:06:25.515810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:52.941 [2024-12-06 07:06:25.515822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:52.941 [2024-12-06 07:06:25.515832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:52.941 [2024-12-06 07:06:25.515845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:52.941 [2024-12-06 07:06:25.515856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:52.941 [2024-12-06 07:06:25.515870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:52.941 [2024-12-06 07:06:25.515881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:39:52.941 [2024-12-06 07:06:25.515892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:52.941 [2024-12-06 07:06:25.515939] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:39:52.941 [2024-12-06 07:06:25.515959] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:39:55.507 [2024-12-06 07:06:27.842054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.842130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:39:55.507 [2024-12-06 07:06:27.842149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2326.130 ms 00:39:55.507 [2024-12-06 07:06:27.842162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.868201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.868290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:55.507 [2024-12-06 07:06:27.868309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.828 ms 00:39:55.507 [2024-12-06 07:06:27.868322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.868475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.868498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:55.507 [2024-12-06 07:06:27.868511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:39:55.507 [2024-12-06 07:06:27.868528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.901095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.901157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:55.507 [2024-12-06 07:06:27.901173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.517 ms 00:39:55.507 [2024-12-06 07:06:27.901185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.901225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.901246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:55.507 [2024-12-06 07:06:27.901258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:55.507 [2024-12-06 07:06:27.901280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.901626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.901646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:55.507 [2024-12-06 07:06:27.901658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:39:55.507 [2024-12-06 07:06:27.901670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 
[2024-12-06 07:06:27.901976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.902037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:55.507 [2024-12-06 07:06:27.902184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:39:55.507 [2024-12-06 07:06:27.902238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.917034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.917226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:55.507 [2024-12-06 07:06:27.917337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.742 ms 00:39:55.507 [2024-12-06 07:06:27.917388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.938078] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:55.507 [2024-12-06 07:06:27.940502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.940535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:55.507 [2024-12-06 07:06:27.940569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.928 ms 00:39:55.507 [2024-12-06 07:06:27.940580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.999035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.999096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:39:55.507 [2024-12-06 07:06:27.999148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.399 ms 00:39:55.507 [2024-12-06 07:06:27.999160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:27.999365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:27.999386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:55.507 [2024-12-06 07:06:27.999403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:39:55.507 [2024-12-06 07:06:27.999414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:28.024605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:28.024642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:39:55.507 [2024-12-06 07:06:28.024661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.105 ms 00:39:55.507 [2024-12-06 07:06:28.024671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:28.048970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:28.049005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:39:55.507 [2024-12-06 07:06:28.049023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.220 ms 00:39:55.507 [2024-12-06 07:06:28.049033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.507 [2024-12-06 07:06:28.049590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.507 [2024-12-06 07:06:28.049611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:55.507 
[2024-12-06 07:06:28.049624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:39:55.507 [2024-12-06 07:06:28.049636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.766 [2024-12-06 07:06:28.119575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.766 [2024-12-06 07:06:28.119620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:39:55.766 [2024-12-06 07:06:28.119641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.895 ms 00:39:55.766 [2024-12-06 07:06:28.119652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.766 [2024-12-06 07:06:28.144634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.766 [2024-12-06 07:06:28.144685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:39:55.766 [2024-12-06 07:06:28.144717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.866 ms 00:39:55.766 [2024-12-06 07:06:28.144747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.766 [2024-12-06 07:06:28.169050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.766 [2024-12-06 07:06:28.169085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:39:55.766 [2024-12-06 07:06:28.169101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.257 ms 00:39:55.766 [2024-12-06 07:06:28.169111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.766 [2024-12-06 07:06:28.194062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.766 [2024-12-06 07:06:28.194099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:55.766 [2024-12-06 07:06:28.194116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.908 ms 00:39:55.767 [2024-12-06 07:06:28.194126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.767 [2024-12-06 07:06:28.194175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.767 [2024-12-06 07:06:28.194191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:55.767 [2024-12-06 07:06:28.194205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:55.767 [2024-12-06 07:06:28.194215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.767 [2024-12-06 07:06:28.194300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.767 [2024-12-06 07:06:28.194319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:55.767 [2024-12-06 07:06:28.194332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:39:55.767 [2024-12-06 07:06:28.194341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.767 [2024-12-06 07:06:28.195598] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2691.081 ms, result 0 00:39:55.767 { 00:39:55.767 "name": "ftl0", 00:39:55.767 "uuid": "dbb34d5e-3007-41d5-8e2b-9cd66c8b840b" 00:39:55.767 } 00:39:55.767 07:06:28 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:39:55.767 07:06:28 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:39:56.024 07:06:28 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:39:56.024 07:06:28 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:39:56.283 [2024-12-06 07:06:28.734862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.734917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:56.283 [2024-12-06 07:06:28.734936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:56.283 [2024-12-06 07:06:28.734948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.734978] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:56.283 [2024-12-06 07:06:28.737697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.737734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:56.283 [2024-12-06 07:06:28.737751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:39:56.283 [2024-12-06 07:06:28.737761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.737996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.738018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:56.283 [2024-12-06 07:06:28.738031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:39:56.283 [2024-12-06 07:06:28.738056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.740671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.740698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:56.283 [2024-12-06 07:06:28.740743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:39:56.283 [2024-12-06 07:06:28.740755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.745919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.745946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:56.283 [2024-12-06 07:06:28.745963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.140 ms 00:39:56.283 [2024-12-06 07:06:28.745973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.770167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.770211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:56.283 [2024-12-06 07:06:28.770231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.114 ms 00:39:56.283 [2024-12-06 07:06:28.770241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.785641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.785877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:56.283 [2024-12-06 07:06:28.785910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.354 ms 00:39:56.283 [2024-12-06 07:06:28.785924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.786100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.786130] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:56.283 [2024-12-06 07:06:28.786148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:39:56.283 [2024-12-06 07:06:28.786159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.811394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.811429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:56.283 [2024-12-06 07:06:28.811446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.175 ms 00:39:56.283 [2024-12-06 07:06:28.811455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.283 [2024-12-06 07:06:28.835562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.283 [2024-12-06 07:06:28.835754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:56.283 [2024-12-06 07:06:28.835784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.064 ms 00:39:56.283 [2024-12-06 07:06:28.835796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.284 [2024-12-06 07:06:28.859561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.284 [2024-12-06 07:06:28.859597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:56.284 [2024-12-06 07:06:28.859618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.707 ms 00:39:56.284 [2024-12-06 07:06:28.859628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.544 [2024-12-06 07:06:28.885456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.544 [2024-12-06 07:06:28.885492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:56.544 [2024-12-06 07:06:28.885509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.727 ms 00:39:56.544 [2024-12-06 07:06:28.885518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.544 [2024-12-06 07:06:28.885560] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:56.544 [2024-12-06 07:06:28.885581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:56.544 [2024-12-06 07:06:28.885699] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.885994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 
[2024-12-06 07:06:28.886067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:39:56.545 [2024-12-06 07:06:28.886434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:39:56.545 [2024-12-06 07:06:28.886765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:56.546 [2024-12-06 07:06:28.886994] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:56.546 [2024-12-06 07:06:28.887007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbb34d5e-3007-41d5-8e2b-9cd66c8b840b 00:39:56.546 [2024-12-06 07:06:28.887018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:56.546 [2024-12-06 07:06:28.887032] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:56.546 [2024-12-06 07:06:28.887045] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:56.546 [2024-12-06 07:06:28.887081] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:56.546 [2024-12-06 07:06:28.887092] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:56.546 [2024-12-06 07:06:28.887105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:56.546 [2024-12-06 07:06:28.887116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:56.546 [2024-12-06 07:06:28.887142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:56.546 [2024-12-06 07:06:28.887151] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:39:56.546 [2024-12-06 07:06:28.887163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.546 [2024-12-06 07:06:28.887173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:56.546 [2024-12-06 07:06:28.887186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:39:56.546 [2024-12-06 07:06:28.887199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:28.900617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.546 [2024-12-06 07:06:28.900821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:56.546 [2024-12-06 07:06:28.900870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.372 ms 00:39:56.546 [2024-12-06 07:06:28.900882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:28.901349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:56.546 [2024-12-06 07:06:28.901373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:56.546 [2024-12-06 07:06:28.901408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:39:56.546 [2024-12-06 07:06:28.901418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:28.943202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:28.943242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:56.546 [2024-12-06 07:06:28.943259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:28.943269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:28.943326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:28.943339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:56.546 [2024-12-06 07:06:28.943354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:28.943364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:28.943464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:28.943482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:56.546 [2024-12-06 07:06:28.943494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:28.943504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:28.943531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:28.943543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:56.546 [2024-12-06 07:06:28.943554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:28.943629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:29.024252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:29.024527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:56.546 [2024-12-06 07:06:29.024561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:39:56.546 [2024-12-06 07:06:29.024590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:29.092524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:29.092573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:56.546 [2024-12-06 07:06:29.092607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:29.092620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:29.092728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:29.092783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:56.546 [2024-12-06 07:06:29.092798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:29.092808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:29.092907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:29.092926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:56.546 [2024-12-06 07:06:29.092940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:29.092951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:29.093068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:29.093085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:56.546 [2024-12-06 07:06:29.093099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:29.093125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:29.093206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:29.093223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:56.546 [2024-12-06 07:06:29.093238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:29.093248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:29.093296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:29.093311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:56.546 [2024-12-06 07:06:29.093324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:29.093335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.546 [2024-12-06 07:06:29.093390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:56.546 [2024-12-06 07:06:29.093406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:56.546 [2024-12-06 07:06:29.093420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:56.546 [2024-12-06 07:06:29.093431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:56.547 [2024-12-06 07:06:29.093588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 358.686 ms, result 0 00:39:56.547 true 00:39:56.547 07:06:29 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78746 
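[Editor's note] The 'FTL shutdown' above completes the first half of the restore scenario: restore.sh@61-63 wrapped the bdev subsystem dump into a standalone JSON config, and restore.sh@65 unloaded ftl0 cleanly so its superblock and L2P were persisted. A minimal sketch of that sequence, reconstructed from the script traces in this log (the shell variable names are mine; the JSON path is the one later passed to spdk_dd via --json):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # Wrap the bdev subsystem dump into a full, loadable config (restore.sh@61-63).
    {
        echo '{"subsystems": ['
        "$RPC" save_subsystem_config -n bdev
        echo ']}'
    } > "$FTL_JSON"

    # Clean unload persists FTL metadata; this produced the 'FTL shutdown' trace above.
    "$RPC" bdev_ftl_unload -b ftl0

After the app is killed below, restore.sh@69-73 writes 262144 x 4 KiB = 1073741824 bytes (1 GiB) of urandom data to a testfile at 265 MB/s, records its md5sum, then replays it into ftl0 with spdk_dd --json="$FTL_JSON", which recreates the FTL device from the saved config; that is what produces the second 'FTL startup' trace and the ~24 MBps Copying progress further on.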
00:39:56.547 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78746 ']'
00:39:56.547 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78746
00:39:56.547 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:39:56.547 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:56.547 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78746
00:39:56.806 killing process with pid 78746
07:06:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:39:56.806 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:39:56.806 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78746'
00:39:56.806 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78746
00:39:56.806 07:06:29 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78746
00:40:01.019 07:06:33 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:40:05.207 262144+0 records in
00:40:05.208 262144+0 records out
00:40:05.208 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.05551 s, 265 MB/s
00:40:05.208 07:06:37 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:40:07.111 07:06:39 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:40:07.112 [2024-12-06 07:06:39.295404] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
00:40:07.112 [2024-12-06 07:06:39.295606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78966 ]
00:40:07.112 [2024-12-06 07:06:39.487576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:07.112 [2024-12-06 07:06:39.616455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:07.376 [2024-12-06 07:06:39.893409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:40:07.376 [2024-12-06 07:06:39.893494] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:40:07.661 [2024-12-06 07:06:40.060517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:07.661 [2024-12-06 07:06:40.060579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:40:07.661 [2024-12-06 07:06:40.060628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:40:07.661 [2024-12-06 07:06:40.060639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:07.661 [2024-12-06 07:06:40.060750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:07.661 [2024-12-06 07:06:40.060775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:40:07.661 [2024-12-06 07:06:40.060788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:40:07.661 [2024-12-06 07:06:40.060798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:07.661 [2024-12-06 07:06:40.060844] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0]
Using nvc0n1p0 as write buffer cache 00:40:07.661 [2024-12-06 07:06:40.061819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:07.661 [2024-12-06 07:06:40.061870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.661 [2024-12-06 07:06:40.061898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:07.661 [2024-12-06 07:06:40.061909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:40:07.661 [2024-12-06 07:06:40.061920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.661 [2024-12-06 07:06:40.063106] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:07.661 [2024-12-06 07:06:40.080679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.661 [2024-12-06 07:06:40.080780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:07.661 [2024-12-06 07:06:40.080807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.572 ms 00:40:07.661 [2024-12-06 07:06:40.080826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.661 [2024-12-06 07:06:40.080979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.661 [2024-12-06 07:06:40.081009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:07.661 [2024-12-06 07:06:40.081044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:40:07.661 [2024-12-06 07:06:40.081092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.661 [2024-12-06 07:06:40.085182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.661 [2024-12-06 07:06:40.085217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:07.661 [2024-12-06 07:06:40.085247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:40:07.661 [2024-12-06 07:06:40.085270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.661 [2024-12-06 07:06:40.085370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.661 [2024-12-06 07:06:40.085388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:07.661 [2024-12-06 07:06:40.085399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:40:07.661 [2024-12-06 07:06:40.085409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.661 [2024-12-06 07:06:40.085456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.661 [2024-12-06 07:06:40.085471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:07.661 [2024-12-06 07:06:40.085482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:07.661 [2024-12-06 07:06:40.085491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.661 [2024-12-06 07:06:40.085534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:07.661 [2024-12-06 07:06:40.089197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.661 [2024-12-06 07:06:40.089228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:07.661 [2024-12-06 07:06:40.089266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:40:07.661 [2024-12-06 07:06:40.089276] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.661 [2024-12-06 07:06:40.089315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.661 [2024-12-06 07:06:40.089330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:07.661 [2024-12-06 07:06:40.089341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:07.661 [2024-12-06 07:06:40.089350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.661 [2024-12-06 07:06:40.089391] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:07.661 [2024-12-06 07:06:40.089427] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:07.661 [2024-12-06 07:06:40.089466] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:07.661 [2024-12-06 07:06:40.089491] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:07.661 [2024-12-06 07:06:40.089585] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:07.661 [2024-12-06 07:06:40.089598] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:07.661 [2024-12-06 07:06:40.089611] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:07.661 [2024-12-06 07:06:40.089623] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:07.661 [2024-12-06 07:06:40.089635] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:07.661 [2024-12-06 07:06:40.089645] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:07.661 [2024-12-06 07:06:40.089655] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:07.661 [2024-12-06 07:06:40.089673] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:07.662 [2024-12-06 07:06:40.089683] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:07.662 [2024-12-06 07:06:40.089693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.089703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:07.662 [2024-12-06 07:06:40.089766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:40:07.662 [2024-12-06 07:06:40.089776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.089859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.089872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:07.662 [2024-12-06 07:06:40.089884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:40:07.662 [2024-12-06 07:06:40.089893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.090010] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:07.662 [2024-12-06 07:06:40.090062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:07.662 [2024-12-06 07:06:40.090089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:40:07.662 [2024-12-06 07:06:40.090100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:07.662 [2024-12-06 07:06:40.090136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:07.662 [2024-12-06 07:06:40.090156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:07.662 [2024-12-06 07:06:40.090166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:07.662 [2024-12-06 07:06:40.090186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:07.662 [2024-12-06 07:06:40.090196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:07.662 [2024-12-06 07:06:40.090206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:07.662 [2024-12-06 07:06:40.090234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:07.662 [2024-12-06 07:06:40.090245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:07.662 [2024-12-06 07:06:40.090255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:07.662 [2024-12-06 07:06:40.090275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:07.662 [2024-12-06 07:06:40.090285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:07.662 [2024-12-06 07:06:40.090306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:07.662 [2024-12-06 07:06:40.090325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:07.662 [2024-12-06 07:06:40.090335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:07.662 [2024-12-06 07:06:40.090355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:07.662 [2024-12-06 07:06:40.090365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:07.662 [2024-12-06 07:06:40.090384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:07.662 [2024-12-06 07:06:40.090394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:07.662 [2024-12-06 07:06:40.090413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:07.662 [2024-12-06 07:06:40.090423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:07.662 [2024-12-06 07:06:40.090442] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:40:07.662 [2024-12-06 07:06:40.090452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:07.662 [2024-12-06 07:06:40.090462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:07.662 [2024-12-06 07:06:40.090472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:07.662 [2024-12-06 07:06:40.090481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:07.662 [2024-12-06 07:06:40.090491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:07.662 [2024-12-06 07:06:40.090511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:07.662 [2024-12-06 07:06:40.090520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090530] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:07.662 [2024-12-06 07:06:40.090541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:07.662 [2024-12-06 07:06:40.090551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:07.662 [2024-12-06 07:06:40.090563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:07.662 [2024-12-06 07:06:40.090574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:07.662 [2024-12-06 07:06:40.090585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:07.662 [2024-12-06 07:06:40.090594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:07.662 [2024-12-06 07:06:40.090605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:07.662 [2024-12-06 07:06:40.090616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:07.662 [2024-12-06 07:06:40.090626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:07.662 [2024-12-06 07:06:40.090638] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:07.662 [2024-12-06 07:06:40.090651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:07.662 [2024-12-06 07:06:40.090673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:07.662 [2024-12-06 07:06:40.090685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:07.662 [2024-12-06 07:06:40.090696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:07.662 [2024-12-06 07:06:40.090724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:07.662 [2024-12-06 07:06:40.090738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:07.662 [2024-12-06 07:06:40.090749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:07.662 [2024-12-06 07:06:40.090760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:07.662 [2024-12-06 07:06:40.090771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:07.662 [2024-12-06 07:06:40.090798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:07.662 [2024-12-06 07:06:40.090809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:07.662 [2024-12-06 07:06:40.090819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:07.662 [2024-12-06 07:06:40.090832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:07.662 [2024-12-06 07:06:40.090843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:07.662 [2024-12-06 07:06:40.090856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:07.662 [2024-12-06 07:06:40.090867] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:07.662 [2024-12-06 07:06:40.090879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:07.662 [2024-12-06 07:06:40.090891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:07.662 [2024-12-06 07:06:40.090902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:07.662 [2024-12-06 07:06:40.090914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:07.662 [2024-12-06 07:06:40.090925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:07.662 [2024-12-06 07:06:40.090937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.090948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:07.662 [2024-12-06 07:06:40.090959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:40:07.662 [2024-12-06 07:06:40.090970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.118865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.118914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:07.662 [2024-12-06 07:06:40.118931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.807 ms 00:40:07.662 [2024-12-06 07:06:40.118951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.119042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.119055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:07.662 [2024-12-06 07:06:40.119066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.049 ms 00:40:07.662 [2024-12-06 07:06:40.119074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.160147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.160207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:07.662 [2024-12-06 07:06:40.160224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.988 ms 00:40:07.662 [2024-12-06 07:06:40.160234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.160331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.160347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:07.662 [2024-12-06 07:06:40.160372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:07.662 [2024-12-06 07:06:40.160383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.161008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.161073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:07.662 [2024-12-06 07:06:40.161210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:40:07.662 [2024-12-06 07:06:40.161257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.161433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.161534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:07.662 [2024-12-06 07:06:40.161648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:40:07.662 [2024-12-06 07:06:40.161694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.175907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.175948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:07.662 [2024-12-06 07:06:40.175979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.069 ms 00:40:07.662 [2024-12-06 07:06:40.175989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.189192] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:40:07.662 [2024-12-06 07:06:40.189231] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:07.662 [2024-12-06 07:06:40.189246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.189256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:07.662 [2024-12-06 07:06:40.189267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.136 ms 00:40:07.662 [2024-12-06 07:06:40.189276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.212255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.212308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:07.662 [2024-12-06 07:06:40.212323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.940 ms 00:40:07.662 [2024-12-06 07:06:40.212333] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.224485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.224522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:07.662 [2024-12-06 07:06:40.224552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.113 ms 00:40:07.662 [2024-12-06 07:06:40.224562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.238450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.238507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:07.662 [2024-12-06 07:06:40.238540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.838 ms 00:40:07.662 [2024-12-06 07:06:40.238552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.662 [2024-12-06 07:06:40.239457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.662 [2024-12-06 07:06:40.239492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:07.662 [2024-12-06 07:06:40.239523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:40:07.662 [2024-12-06 07:06:40.239545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.928 [2024-12-06 07:06:40.299764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.928 [2024-12-06 07:06:40.300055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:07.929 [2024-12-06 07:06:40.300083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.192 ms 00:40:07.929 [2024-12-06 07:06:40.300112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.929 [2024-12-06 07:06:40.310342] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:07.929 [2024-12-06 07:06:40.312213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.929 [2024-12-06 07:06:40.312242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:07.929 [2024-12-06 07:06:40.312255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.047 ms 00:40:07.929 [2024-12-06 07:06:40.312273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.929 [2024-12-06 07:06:40.312412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.929 [2024-12-06 07:06:40.312432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:07.929 [2024-12-06 07:06:40.312444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:07.929 [2024-12-06 07:06:40.312455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.929 [2024-12-06 07:06:40.312545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.929 [2024-12-06 07:06:40.312561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:07.929 [2024-12-06 07:06:40.312573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:40:07.929 [2024-12-06 07:06:40.312583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.929 [2024-12-06 07:06:40.312625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.929 [2024-12-06 07:06:40.312639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:40:07.929 [2024-12-06 07:06:40.312649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:07.929 [2024-12-06 07:06:40.312658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.929 [2024-12-06 07:06:40.312750] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:07.929 [2024-12-06 07:06:40.312776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.929 [2024-12-06 07:06:40.312786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:07.929 [2024-12-06 07:06:40.312796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:40:07.929 [2024-12-06 07:06:40.312806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.929 [2024-12-06 07:06:40.337152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.929 [2024-12-06 07:06:40.337188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:07.929 [2024-12-06 07:06:40.337202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.320 ms 00:40:07.929 [2024-12-06 07:06:40.337224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.929 [2024-12-06 07:06:40.337291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:07.929 [2024-12-06 07:06:40.337306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:07.929 [2024-12-06 07:06:40.337317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:40:07.929 [2024-12-06 07:06:40.337326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:07.929 [2024-12-06 07:06:40.338772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.576 ms, result 0 00:40:08.865  [2024-12-06T07:06:42.392Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-06T07:06:43.770Z] Copying: 47/1024 [MB] (24 MBps) [2024-12-06T07:06:44.709Z] Copying: 71/1024 [MB] (23 MBps) [2024-12-06T07:06:45.662Z] Copying: 95/1024 [MB] (24 MBps) [2024-12-06T07:06:46.599Z] Copying: 117/1024 [MB] (22 MBps) [2024-12-06T07:06:47.537Z] Copying: 141/1024 [MB] (24 MBps) [2024-12-06T07:06:48.475Z] Copying: 166/1024 [MB] (24 MBps) [2024-12-06T07:06:49.415Z] Copying: 190/1024 [MB] (24 MBps) [2024-12-06T07:06:50.353Z] Copying: 214/1024 [MB] (24 MBps) [2024-12-06T07:06:51.732Z] Copying: 238/1024 [MB] (24 MBps) [2024-12-06T07:06:52.672Z] Copying: 262/1024 [MB] (24 MBps) [2024-12-06T07:06:53.610Z] Copying: 286/1024 [MB] (23 MBps) [2024-12-06T07:06:54.548Z] Copying: 310/1024 [MB] (24 MBps) [2024-12-06T07:06:55.485Z] Copying: 334/1024 [MB] (24 MBps) [2024-12-06T07:06:56.419Z] Copying: 359/1024 [MB] (24 MBps) [2024-12-06T07:06:57.353Z] Copying: 383/1024 [MB] (24 MBps) [2024-12-06T07:06:58.730Z] Copying: 407/1024 [MB] (24 MBps) [2024-12-06T07:06:59.668Z] Copying: 431/1024 [MB] (24 MBps) [2024-12-06T07:07:00.606Z] Copying: 455/1024 [MB] (24 MBps) [2024-12-06T07:07:01.543Z] Copying: 480/1024 [MB] (24 MBps) [2024-12-06T07:07:02.482Z] Copying: 503/1024 [MB] (23 MBps) [2024-12-06T07:07:03.415Z] Copying: 528/1024 [MB] (24 MBps) [2024-12-06T07:07:04.788Z] Copying: 552/1024 [MB] (24 MBps) [2024-12-06T07:07:05.354Z] Copying: 577/1024 [MB] (24 MBps) [2024-12-06T07:07:06.731Z] Copying: 601/1024 [MB] (24 MBps) [2024-12-06T07:07:07.666Z] Copying: 626/1024 [MB] (24 MBps) [2024-12-06T07:07:08.609Z] Copying: 650/1024 [MB] (24 
MBps) [2024-12-06T07:07:09.558Z] Copying: 674/1024 [MB] (24 MBps) [2024-12-06T07:07:10.497Z] Copying: 698/1024 [MB] (24 MBps) [2024-12-06T07:07:11.433Z] Copying: 722/1024 [MB] (24 MBps) [2024-12-06T07:07:12.367Z] Copying: 746/1024 [MB] (23 MBps) [2024-12-06T07:07:13.746Z] Copying: 770/1024 [MB] (23 MBps) [2024-12-06T07:07:14.683Z] Copying: 794/1024 [MB] (24 MBps) [2024-12-06T07:07:15.620Z] Copying: 819/1024 [MB] (24 MBps) [2024-12-06T07:07:16.556Z] Copying: 842/1024 [MB] (23 MBps) [2024-12-06T07:07:17.494Z] Copying: 866/1024 [MB] (23 MBps) [2024-12-06T07:07:18.429Z] Copying: 890/1024 [MB] (24 MBps) [2024-12-06T07:07:19.366Z] Copying: 914/1024 [MB] (24 MBps) [2024-12-06T07:07:20.759Z] Copying: 939/1024 [MB] (24 MBps) [2024-12-06T07:07:21.695Z] Copying: 962/1024 [MB] (23 MBps) [2024-12-06T07:07:22.628Z] Copying: 986/1024 [MB] (23 MBps) [2024-12-06T07:07:22.936Z] Copying: 1010/1024 [MB] (23 MBps) [2024-12-06T07:07:22.936Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-06 07:07:22.911299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.345 [2024-12-06 07:07:22.911389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:50.345 [2024-12-06 07:07:22.911406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:40:50.345 [2024-12-06 07:07:22.911417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.345 [2024-12-06 07:07:22.911441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:50.345 [2024-12-06 07:07:22.914360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.345 [2024-12-06 07:07:22.914390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:50.345 [2024-12-06 07:07:22.914409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:40:50.345 [2024-12-06 07:07:22.914419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.345 [2024-12-06 07:07:22.916008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.345 [2024-12-06 07:07:22.916229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:50.345 [2024-12-06 07:07:22.916253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:40:50.345 [2024-12-06 07:07:22.916265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.345 [2024-12-06 07:07:22.931108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.345 [2024-12-06 07:07:22.931147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:50.346 [2024-12-06 07:07:22.931178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.792 ms 00:40:50.346 [2024-12-06 07:07:22.931189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:22.936885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.605 [2024-12-06 07:07:22.936914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:50.605 [2024-12-06 07:07:22.936943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.652 ms 00:40:50.605 [2024-12-06 07:07:22.936953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:22.961655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.605 [2024-12-06 07:07:22.961690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist NV cache metadata 00:40:50.605 [2024-12-06 07:07:22.961754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.602 ms 00:40:50.605 [2024-12-06 07:07:22.961768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:22.976317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.605 [2024-12-06 07:07:22.976467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:50.605 [2024-12-06 07:07:22.976507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.510 ms 00:40:50.605 [2024-12-06 07:07:22.976520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:22.976654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.605 [2024-12-06 07:07:22.976676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:50.605 [2024-12-06 07:07:22.976688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:40:50.605 [2024-12-06 07:07:22.976698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:23.001416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.605 [2024-12-06 07:07:23.001452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:50.605 [2024-12-06 07:07:23.001466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.679 ms 00:40:50.605 [2024-12-06 07:07:23.001475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:23.025837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.605 [2024-12-06 07:07:23.025871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:50.605 [2024-12-06 07:07:23.025885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.326 ms 00:40:50.605 [2024-12-06 07:07:23.025894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:23.049656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.605 [2024-12-06 07:07:23.049691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:50.605 [2024-12-06 07:07:23.049738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.726 ms 00:40:50.605 [2024-12-06 07:07:23.049752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:23.073747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.605 [2024-12-06 07:07:23.073782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:50.605 [2024-12-06 07:07:23.073811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.936 ms 00:40:50.605 [2024-12-06 07:07:23.073820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.605 [2024-12-06 07:07:23.073859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:50.605 [2024-12-06 07:07:23.073877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 
07:07:23.073915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.073994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:50.605 [2024-12-06 07:07:23.074004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:40:50.606 [2024-12-06 07:07:23.074172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:50.606 [2024-12-06 07:07:23.074791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:50.607 [2024-12-06 07:07:23.074895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:50.607 [2024-12-06 07:07:23.074909] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbb34d5e-3007-41d5-8e2b-9cd66c8b840b 00:40:50.607 [2024-12-06 07:07:23.074919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 0 00:40:50.607 [2024-12-06 07:07:23.074929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:50.607 [2024-12-06 07:07:23.074939] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:50.607 [2024-12-06 07:07:23.074948] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:50.607 [2024-12-06 07:07:23.074957] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:50.607 [2024-12-06 07:07:23.074992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:50.607 [2024-12-06 07:07:23.075019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:50.607 [2024-12-06 07:07:23.075028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:50.607 [2024-12-06 07:07:23.075037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:50.607 [2024-12-06 07:07:23.075048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.607 [2024-12-06 07:07:23.075057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:50.607 [2024-12-06 07:07:23.075069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:40:50.607 [2024-12-06 07:07:23.075079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.607 [2024-12-06 07:07:23.088311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.607 [2024-12-06 07:07:23.088342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:50.607 [2024-12-06 07:07:23.088356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.190 ms 00:40:50.607 [2024-12-06 07:07:23.088365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.607 [2024-12-06 07:07:23.088764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.607 [2024-12-06 07:07:23.088781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:50.607 [2024-12-06 07:07:23.088792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:40:50.607 [2024-12-06 07:07:23.088809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.607 [2024-12-06 07:07:23.121656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.607 [2024-12-06 07:07:23.121695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:50.607 [2024-12-06 07:07:23.121736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.607 [2024-12-06 07:07:23.121748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.607 [2024-12-06 07:07:23.121798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.607 [2024-12-06 07:07:23.121811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:50.607 [2024-12-06 07:07:23.121837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.607 [2024-12-06 07:07:23.121852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.607 [2024-12-06 07:07:23.121937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.607 [2024-12-06 07:07:23.121955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:50.607 [2024-12-06 07:07:23.121966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.607 [2024-12-06 07:07:23.121976] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.607 [2024-12-06 07:07:23.121996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.607 [2024-12-06 07:07:23.122008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:50.607 [2024-12-06 07:07:23.122018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.607 [2024-12-06 07:07:23.122027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.201649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.865 [2024-12-06 07:07:23.201718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:50.865 [2024-12-06 07:07:23.201751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.865 [2024-12-06 07:07:23.201761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.266297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.865 [2024-12-06 07:07:23.266343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:50.865 [2024-12-06 07:07:23.266360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.865 [2024-12-06 07:07:23.266376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.266437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.865 [2024-12-06 07:07:23.266451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:50.865 [2024-12-06 07:07:23.266461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.865 [2024-12-06 07:07:23.266470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.266531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.865 [2024-12-06 07:07:23.266546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:50.865 [2024-12-06 07:07:23.266556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.865 [2024-12-06 07:07:23.266565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.266669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.865 [2024-12-06 07:07:23.266687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:50.865 [2024-12-06 07:07:23.266697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.865 [2024-12-06 07:07:23.266743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.266810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.865 [2024-12-06 07:07:23.266826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:50.865 [2024-12-06 07:07:23.266853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.865 [2024-12-06 07:07:23.266863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.266903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.865 [2024-12-06 07:07:23.266922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:50.865 [2024-12-06 07:07:23.266933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:40:50.865 [2024-12-06 07:07:23.266942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.266987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.865 [2024-12-06 07:07:23.267002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:50.865 [2024-12-06 07:07:23.267012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.865 [2024-12-06 07:07:23.267022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.865 [2024-12-06 07:07:23.267192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.828 ms, result 0 00:40:51.798 00:40:51.798 00:40:51.798 07:07:24 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:40:51.798 [2024-12-06 07:07:24.279138] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:40:51.798 [2024-12-06 07:07:24.279303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79409 ] 00:40:52.056 [2024-12-06 07:07:24.447989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:52.056 [2024-12-06 07:07:24.527320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.316 [2024-12-06 07:07:24.786479] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:52.316 [2024-12-06 07:07:24.786561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:52.576 [2024-12-06 07:07:24.942208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.942256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:52.576 [2024-12-06 07:07:24.942290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:52.576 [2024-12-06 07:07:24.942300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.942356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.942374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:52.576 [2024-12-06 07:07:24.942385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:40:52.576 [2024-12-06 07:07:24.942395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.942421] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:52.576 [2024-12-06 07:07:24.943436] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:52.576 [2024-12-06 07:07:24.943675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.943826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:52.576 [2024-12-06 07:07:24.943944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:40:52.576 [2024-12-06 07:07:24.943965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.945100] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:52.576 [2024-12-06 07:07:24.958441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.958658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:52.576 [2024-12-06 07:07:24.958684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.343 ms 00:40:52.576 [2024-12-06 07:07:24.958696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.958798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.958818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:52.576 [2024-12-06 07:07:24.958831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:40:52.576 [2024-12-06 07:07:24.958841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.963127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.963162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:52.576 [2024-12-06 07:07:24.963191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.176 ms 00:40:52.576 [2024-12-06 07:07:24.963206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.963284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.963300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:52.576 [2024-12-06 07:07:24.963311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:40:52.576 [2024-12-06 07:07:24.963320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.963366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.963382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:52.576 [2024-12-06 07:07:24.963392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:52.576 [2024-12-06 07:07:24.963401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.963435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:52.576 [2024-12-06 07:07:24.967042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.967074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:52.576 [2024-12-06 07:07:24.967106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.616 ms 00:40:52.576 [2024-12-06 07:07:24.967115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.967150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.576 [2024-12-06 07:07:24.967164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:52.576 [2024-12-06 07:07:24.967175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:52.576 [2024-12-06 07:07:24.967184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.576 [2024-12-06 07:07:24.967208] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:40:52.576 [2024-12-06 07:07:24.967232] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:52.576 [2024-12-06 07:07:24.967268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:52.576 [2024-12-06 07:07:24.967287] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:52.576 [2024-12-06 07:07:24.967377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:52.576 [2024-12-06 07:07:24.967390] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:52.577 [2024-12-06 07:07:24.967402] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:52.577 [2024-12-06 07:07:24.967414] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:52.577 [2024-12-06 07:07:24.967425] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:52.577 [2024-12-06 07:07:24.967435] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:52.577 [2024-12-06 07:07:24.967443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:52.577 [2024-12-06 07:07:24.967456] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:52.577 [2024-12-06 07:07:24.967465] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:52.577 [2024-12-06 07:07:24.967475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.577 [2024-12-06 07:07:24.967484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:52.577 [2024-12-06 07:07:24.967494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:40:52.577 [2024-12-06 07:07:24.967503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.577 [2024-12-06 07:07:24.967578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.577 [2024-12-06 07:07:24.967591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:52.577 [2024-12-06 07:07:24.967601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:40:52.577 [2024-12-06 07:07:24.967610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.577 [2024-12-06 07:07:24.967762] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:52.577 [2024-12-06 07:07:24.967783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:52.577 [2024-12-06 07:07:24.967795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:52.577 [2024-12-06 07:07:24.967804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:52.577 [2024-12-06 07:07:24.967815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:52.577 [2024-12-06 07:07:24.967823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:52.577 [2024-12-06 07:07:24.967832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:52.577 [2024-12-06 07:07:24.967842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:52.577 [2024-12-06 07:07:24.967851] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:52.577 [2024-12-06 07:07:24.967859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:52.577 [2024-12-06 07:07:24.967868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:52.577 [2024-12-06 07:07:24.967876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:52.577 [2024-12-06 07:07:24.967885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:52.577 [2024-12-06 07:07:24.967906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:52.577 [2024-12-06 07:07:24.967915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:52.577 [2024-12-06 07:07:24.967924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:52.577 [2024-12-06 07:07:24.967932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:52.577 [2024-12-06 07:07:24.967941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:52.577 [2024-12-06 07:07:24.967949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:52.577 [2024-12-06 07:07:24.967959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:52.577 [2024-12-06 07:07:24.967967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:52.577 [2024-12-06 07:07:24.967976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:52.577 [2024-12-06 07:07:24.967984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:52.577 [2024-12-06 07:07:24.967993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:52.577 [2024-12-06 07:07:24.968001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:52.577 [2024-12-06 07:07:24.968009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:52.577 [2024-12-06 07:07:24.968018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:52.577 [2024-12-06 07:07:24.968026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:52.577 [2024-12-06 07:07:24.968035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:52.577 [2024-12-06 07:07:24.968043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:52.577 [2024-12-06 07:07:24.968051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:52.577 [2024-12-06 07:07:24.968060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:52.577 [2024-12-06 07:07:24.968069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:52.577 [2024-12-06 07:07:24.968077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:52.577 [2024-12-06 07:07:24.968086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:52.577 [2024-12-06 07:07:24.968095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:52.577 [2024-12-06 07:07:24.968118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:52.577 [2024-12-06 07:07:24.968126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:52.577 [2024-12-06 07:07:24.968134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:52.577 [2024-12-06 07:07:24.968142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:40:52.577 [2024-12-06 07:07:24.968151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:52.577 [2024-12-06 07:07:24.968159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:52.577 [2024-12-06 07:07:24.968168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:52.577 [2024-12-06 07:07:24.968176] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:52.577 [2024-12-06 07:07:24.968186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:52.577 [2024-12-06 07:07:24.968195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:52.577 [2024-12-06 07:07:24.968204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:52.577 [2024-12-06 07:07:24.968213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:52.577 [2024-12-06 07:07:24.968221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:52.577 [2024-12-06 07:07:24.968230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:52.577 [2024-12-06 07:07:24.968239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:52.577 [2024-12-06 07:07:24.968247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:52.577 [2024-12-06 07:07:24.968255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:52.577 [2024-12-06 07:07:24.968265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:52.577 [2024-12-06 07:07:24.968278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:52.577 [2024-12-06 07:07:24.968334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:52.577 [2024-12-06 07:07:24.968344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:52.577 [2024-12-06 07:07:24.968355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:52.577 [2024-12-06 07:07:24.968364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:52.577 [2024-12-06 07:07:24.968374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:52.577 [2024-12-06 07:07:24.968384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:52.577 [2024-12-06 07:07:24.968393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:52.577 [2024-12-06 07:07:24.968403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:52.577 [2024-12-06 07:07:24.968413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:52.577 [2024-12-06 07:07:24.968422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:52.577 [2024-12-06 
07:07:24.968432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:52.577 [2024-12-06 07:07:24.968441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:52.577 [2024-12-06 07:07:24.968451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:52.578 [2024-12-06 07:07:24.968461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:52.578 [2024-12-06 07:07:24.968470] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:52.578 [2024-12-06 07:07:24.968482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:52.578 [2024-12-06 07:07:24.968494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:52.578 [2024-12-06 07:07:24.968504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:52.578 [2024-12-06 07:07:24.968514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:52.578 [2024-12-06 07:07:24.968523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:52.578 [2024-12-06 07:07:24.968534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:24.968544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:52.578 [2024-12-06 07:07:24.968555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:40:52.578 [2024-12-06 07:07:24.968565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:24.995948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:24.996163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:52.578 [2024-12-06 07:07:24.996295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.328 ms 00:40:52.578 [2024-12-06 07:07:24.996369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:24.996572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:24.996624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:52.578 [2024-12-06 07:07:24.996863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:40:52.578 [2024-12-06 07:07:24.996913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.046609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.046826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:52.578 [2024-12-06 07:07:25.046938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.576 ms 00:40:52.578 [2024-12-06 07:07:25.046983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 
07:07:25.047120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.047172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:52.578 [2024-12-06 07:07:25.047215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:52.578 [2024-12-06 07:07:25.047334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.047747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.047875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:52.578 [2024-12-06 07:07:25.047972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:40:52.578 [2024-12-06 07:07:25.048074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.048253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.048338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:52.578 [2024-12-06 07:07:25.048448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:40:52.578 [2024-12-06 07:07:25.048468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.062808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.062987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:52.578 [2024-12-06 07:07:25.063108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.307 ms 00:40:52.578 [2024-12-06 07:07:25.063129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.076506] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:40:52.578 [2024-12-06 07:07:25.076757] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:52.578 [2024-12-06 07:07:25.076782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.076794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:52.578 [2024-12-06 07:07:25.076806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.535 ms 00:40:52.578 [2024-12-06 07:07:25.076816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.100782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.100832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:52.578 [2024-12-06 07:07:25.100862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.922 ms 00:40:52.578 [2024-12-06 07:07:25.100872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.113746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.113781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:52.578 [2024-12-06 07:07:25.113809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.835 ms 00:40:52.578 [2024-12-06 07:07:25.113819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.127088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.127269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:52.578 [2024-12-06 07:07:25.127293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.232 ms 00:40:52.578 [2024-12-06 07:07:25.127305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.578 [2024-12-06 07:07:25.128123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.578 [2024-12-06 07:07:25.128157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:52.578 [2024-12-06 07:07:25.128192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:40:52.578 [2024-12-06 07:07:25.128203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.187013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.837 [2024-12-06 07:07:25.187072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:52.837 [2024-12-06 07:07:25.187095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.780 ms 00:40:52.837 [2024-12-06 07:07:25.187104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.196885] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:52.837 [2024-12-06 07:07:25.198834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.837 [2024-12-06 07:07:25.198862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:52.837 [2024-12-06 07:07:25.198876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.680 ms 00:40:52.837 [2024-12-06 07:07:25.198884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.198989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.837 [2024-12-06 07:07:25.199007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:52.837 [2024-12-06 07:07:25.199022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:52.837 [2024-12-06 07:07:25.199030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.199106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.837 [2024-12-06 07:07:25.199123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:52.837 [2024-12-06 07:07:25.199133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:40:52.837 [2024-12-06 07:07:25.199142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.199165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.837 [2024-12-06 07:07:25.199176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:52.837 [2024-12-06 07:07:25.199186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:52.837 [2024-12-06 07:07:25.199195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.199233] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:52.837 [2024-12-06 07:07:25.199248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.837 [2024-12-06 07:07:25.199257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 00:40:52.837 [2024-12-06 07:07:25.199266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:40:52.837 [2024-12-06 07:07:25.199275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.223453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.837 [2024-12-06 07:07:25.223489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:52.837 [2024-12-06 07:07:25.223509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.158 ms 00:40:52.837 [2024-12-06 07:07:25.223519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.223584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.837 [2024-12-06 07:07:25.223599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:52.837 [2024-12-06 07:07:25.223609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:40:52.837 [2024-12-06 07:07:25.223617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.837 [2024-12-06 07:07:25.224906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.090 ms, result 0 00:40:54.212  [2024-12-06T07:08:10.471Z] Copying: 1024/1024 [MB] (average 22 MBps)
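The "average 22 MBps" summary above is consistent with the run's own timestamps: roughly 1024 MB moved between about 07:07:25 (startup finished) and 07:08:10 (teardown begins). A minimal sketch of that arithmetic, in plain Python; the interval endpoints are read off this log, not from any SPDK API:

# Back-of-the-envelope check of the copy throughput reported above.
# Assumption: wall-clock interval taken from the first/last timestamps
# visible in this log section.
from datetime import datetime

start = datetime.fromisoformat("2024-12-06T07:07:25")
end = datetime.fromisoformat("2024-12-06T07:08:10")
total_mb = 1024

elapsed_s = (end - start).total_seconds()   # 45 s
print(f"{total_mb / elapsed_s:.1f} MBps")   # ~22.8 MBps, matching the ~22 MBps average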
[2024-12-06 07:08:10.392649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:37.880 [2024-12-06 07:08:10.392778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:37.880 [2024-12-06 07:08:10.392810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:37.880 [2024-12-06 07:08:10.392828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:37.880 [2024-12-06 07:08:10.392862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:37.880 [2024-12-06 07:08:10.396550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:37.880 [2024-12-06 07:08:10.396790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:37.880 [2024-12-06 07:08:10.396906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:41:37.880 [2024-12-06 07:08:10.396951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:37.880 [2024-12-06 07:08:10.397275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:37.880 [2024-12-06 07:08:10.397443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:37.880 [2024-12-06 07:08:10.397557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:41:37.880 [2024-12-06 07:08:10.397674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:37.880 [2024-12-06 07:08:10.400842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:37.880 [2024-12-06 07:08:10.400989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:37.880 [2024-12-06 07:08:10.401097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.091 ms 00:41:37.881 [2024-12-06 07:08:10.401233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:37.881 [2024-12-06 07:08:10.406858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:37.881 [2024-12-06 07:08:10.407015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:37.881 [2024-12-06 07:08:10.407141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.563 ms 00:41:37.881 [2024-12-06 07:08:10.407187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:37.881 [2024-12-06 07:08:10.433469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:37.881 [2024-12-06 07:08:10.433681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:37.881 [2024-12-06 07:08:10.433823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.101 ms 00:41:37.881 [2024-12-06 07:08:10.433872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:37.881 [2024-12-06 07:08:10.448958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:37.881 [2024-12-06 07:08:10.449139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:37.881 [2024-12-06 07:08:10.449273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.899 ms 00:41:37.881 [2024-12-06 07:08:10.449321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:37.881 [2024-12-06 07:08:10.449480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:37.881 [2024-12-06 07:08:10.449531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:37.881 [2024-12-06 07:08:10.449568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:41:37.881 [2024-12-06 07:08:10.449655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.141 [2024-12-06 07:08:10.477355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:38.141 [2024-12-06 07:08:10.477551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:38.141 [2024-12-06 07:08:10.477668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.580 ms 00:41:38.141 [2024-12-06 07:08:10.477690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.141 [2024-12-06 07:08:10.503785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:38.141 [2024-12-06 07:08:10.503821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:38.141 [2024-12-06 07:08:10.503852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.028 ms 00:41:38.141 [2024-12-06 07:08:10.503861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.141 [2024-12-06 07:08:10.528784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:38.141 [2024-12-06 07:08:10.528822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:38.141 [2024-12-06 07:08:10.528837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.885 ms 00:41:38.141 [2024-12-06 07:08:10.528846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.141 [2024-12-06 07:08:10.553031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:38.141 [2024-12-06 07:08:10.553071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:38.141 [2024-12-06 07:08:10.553102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.123 ms 00:41:38.141 [2024-12-06 07:08:10.553111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.141 [2024-12-06 07:08:10.553150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:38.141 [2024-12-06 07:08:10.553177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:38.141 [2024-12-06 07:08:10.553209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:38.141 [2024-12-06 07:08:10.553219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:38.141 [2024-12-06 07:08:10.553229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:38.141 [2024-12-06 07:08:10.553238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:38.141 [2024-12-06 07:08:10.553248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:38.141 [2024-12-06 07:08:10.553257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553500] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 
07:08:10.553723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.553990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:41:38.142 [2024-12-06 07:08:10.554028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:38.142 [2024-12-06 07:08:10.554212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:38.143 [2024-12-06 07:08:10.554222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:38.143 [2024-12-06 07:08:10.554232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:38.143 [2024-12-06 07:08:10.554242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:38.143 [2024-12-06 07:08:10.554252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:38.143 [2024-12-06 07:08:10.554270] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:38.143 [2024-12-06 07:08:10.554280] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbb34d5e-3007-41d5-8e2b-9cd66c8b840b 00:41:38.143 [2024-12-06 07:08:10.554291] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:38.143 [2024-12-06 07:08:10.554301] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:38.143 [2024-12-06 07:08:10.554311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:38.143 [2024-12-06 07:08:10.554321] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:38.143 [2024-12-06 07:08:10.554343] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:38.143 [2024-12-06 07:08:10.554353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:38.143 [2024-12-06 07:08:10.554363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:38.143 [2024-12-06 07:08:10.554372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:38.143 [2024-12-06 07:08:10.554381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:38.143 [2024-12-06 07:08:10.554391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:38.143 [2024-12-06 07:08:10.554401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:38.143 [2024-12-06 07:08:10.554411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:41:38.143 [2024-12-06 07:08:10.554426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
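The "WAF: inf" line in the statistics dump above follows directly from the two counters printed just before it: write amplification is total device writes divided by user writes, and this shutdown pass issued no user writes (total writes: 960, user writes: 0), so the ratio diverges. A minimal sketch of that arithmetic in plain Python (not an SPDK API; the second call is a hypothetical mixed workload for contrast):

# Write amplification factor as dumped by ftl_debug.c above:
# WAF = total writes / user writes; zero user writes prints as "inf".
def waf(total_writes: int, user_writes: int) -> float:
    if user_writes == 0:
        return float("inf")
    return total_writes / user_writes

print(waf(960, 0))      # inf -- matches "WAF: inf" in this dump
print(waf(1200, 1000))  # 1.2 -- hypothetical workload with user I/O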
00:41:38.143 [2024-12-06 07:08:10.567409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:38.143 [2024-12-06 07:08:10.567443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:38.143 [2024-12-06 07:08:10.567473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.960 ms 00:41:38.143 [2024-12-06 07:08:10.567497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.143 [2024-12-06 07:08:10.567907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:38.143 [2024-12-06 07:08:10.567935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:38.143 [2024-12-06 07:08:10.567955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:41:38.143 [2024-12-06 07:08:10.567965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.143 [2024-12-06 07:08:10.601117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.143 [2024-12-06 07:08:10.601157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:38.143 [2024-12-06 07:08:10.601171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.143 [2024-12-06 07:08:10.601180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.143 [2024-12-06 07:08:10.601231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.143 [2024-12-06 07:08:10.601245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:38.143 [2024-12-06 07:08:10.601266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.143 [2024-12-06 07:08:10.601276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.143 [2024-12-06 07:08:10.601342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.143 [2024-12-06 07:08:10.601359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:38.143 [2024-12-06 07:08:10.601369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.143 [2024-12-06 07:08:10.601377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.143 [2024-12-06 07:08:10.601395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.143 [2024-12-06 07:08:10.601407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:38.143 [2024-12-06 07:08:10.601416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.143 [2024-12-06 07:08:10.601439] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.143 [2024-12-06 07:08:10.680426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.143 [2024-12-06 07:08:10.680648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:38.143 [2024-12-06 07:08:10.680675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.143 [2024-12-06 07:08:10.680688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.403 [2024-12-06 07:08:10.746965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.403 [2024-12-06 07:08:10.747015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:38.403 [2024-12-06 07:08:10.747042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.403 [2024-12-06 07:08:10.747052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.403 [2024-12-06 07:08:10.747152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.403 [2024-12-06 07:08:10.747167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:38.403 [2024-12-06 07:08:10.747177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.403 [2024-12-06 07:08:10.747186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.403 [2024-12-06 07:08:10.747225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.403 [2024-12-06 07:08:10.747239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:38.403 [2024-12-06 07:08:10.747248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.403 [2024-12-06 07:08:10.747257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.403 [2024-12-06 07:08:10.747371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.403 [2024-12-06 07:08:10.747389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:38.403 [2024-12-06 07:08:10.747399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.403 [2024-12-06 07:08:10.747408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.403 [2024-12-06 07:08:10.747449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.403 [2024-12-06 07:08:10.747465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:38.403 [2024-12-06 07:08:10.747475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.403 [2024-12-06 07:08:10.747483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.403 [2024-12-06 07:08:10.747547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.403 [2024-12-06 07:08:10.747566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:38.403 [2024-12-06 07:08:10.747576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:38.403 [2024-12-06 07:08:10.747585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.403 [2024-12-06 07:08:10.747632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:38.403 [2024-12-06 07:08:10.747646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:38.403 [2024-12-06 07:08:10.747656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:41:38.403 [2024-12-06 07:08:10.747665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:38.403 [2024-12-06 07:08:10.747874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.190 ms, result 0 00:41:38.973 00:41:38.973 00:41:38.973 07:08:11 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:41:40.880 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
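The md5sum -c step above is what gives the restore test its pass/fail signal: the file read back through the FTL bdev must hash to the digest recorded before the dirty shutdown. A stand-alone sketch of the same check in Python, using only hashlib; the function name is made up for illustration and is not part of the SPDK test scripts, and the paths merely mirror the log:

# Recompute a file's MD5 and compare it to a stored "md5sum"-style digest
# line ("<digest>  <filename>"), mirroring `md5sum -c testfile.md5` above.
import hashlib

def md5_ok(data_path: str, md5_file: str) -> bool:
    expected = open(md5_file).read().split()[0]  # digest is the first field
    h = hashlib.md5()
    with open(data_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest() == expected

print(md5_ok("testfile", "testfile.md5"))  # True on a successful restore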
00:41:40.880 07:08:13 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:41:40.880 [2024-12-06 07:08:13.321638] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:41:41.140 [2024-12-06 07:08:13.321791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79891 ] 00:41:41.140 [2024-12-06 07:08:13.491988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:41.140 [2024-12-06 07:08:13.615989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.400 [2024-12-06 07:08:13.878992] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:41.400 [2024-12-06 07:08:13.879358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:41.661 [2024-12-06 07:08:14.035003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.035088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:41.661 [2024-12-06 07:08:14.035107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:41.661 [2024-12-06 07:08:14.035133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.035193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.035213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:41.661 [2024-12-06 07:08:14.035225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:41:41.661 [2024-12-06 07:08:14.035234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.035261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:41.661 [2024-12-06 07:08:14.036162] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:41.661 [2024-12-06 07:08:14.036206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.036221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:41.661 [2024-12-06 07:08:14.036234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:41:41.661 [2024-12-06 07:08:14.036244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.037572] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:41.661 [2024-12-06 07:08:14.052218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.052259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:41.661 [2024-12-06 07:08:14.052291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.647 ms 00:41:41.661 [2024-12-06 07:08:14.052301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.052399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.052418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:41.661 [2024-12-06 07:08:14.052430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:41:41.661 [2024-12-06 07:08:14.052440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.057160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.057195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:41.661 [2024-12-06 07:08:14.057226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.632 ms 00:41:41.661 [2024-12-06 07:08:14.057241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.057319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.057336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:41.661 [2024-12-06 07:08:14.057359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:41:41.661 [2024-12-06 07:08:14.057368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.057414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.057430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:41.661 [2024-12-06 07:08:14.057440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:41:41.661 [2024-12-06 07:08:14.057449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.057482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:41.661 [2024-12-06 07:08:14.061326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.061362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:41.661 [2024-12-06 07:08:14.061397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.851 ms 00:41:41.661 [2024-12-06 07:08:14.061406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.061449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.661 [2024-12-06 07:08:14.061463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:41.661 [2024-12-06 07:08:14.061473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:41.661 [2024-12-06 07:08:14.061482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.661 [2024-12-06 07:08:14.061523] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:41.661 [2024-12-06 07:08:14.061552] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:41.661 [2024-12-06 07:08:14.061588] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area:
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:41.661 [2024-12-06 07:08:14.061609] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:41:41.661 [2024-12-06 07:08:14.061699] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:41.661 [2024-12-06 07:08:14.061712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:41.661 [2024-12-06 07:08:14.061750] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:41.661 [2024-12-06 07:08:14.061773] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:41.662 [2024-12-06 07:08:14.061785] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:41.662 [2024-12-06 07:08:14.061795] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:41.662 [2024-12-06 07:08:14.061805] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:41.662 [2024-12-06 07:08:14.061819] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:41.662 [2024-12-06 07:08:14.061829] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:41.662 [2024-12-06 07:08:14.061839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.662 [2024-12-06 07:08:14.061848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:41.662 [2024-12-06 07:08:14.061858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:41:41.662 [2024-12-06 07:08:14.061877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.662 [2024-12-06 07:08:14.061955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.662 [2024-12-06 07:08:14.061968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:41.662 [2024-12-06 07:08:14.061978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:41:41.662 [2024-12-06 07:08:14.061986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.662 [2024-12-06 07:08:14.062089] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:41.662 [2024-12-06 07:08:14.062107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:41.662 [2024-12-06 07:08:14.062118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:41.662 [2024-12-06 07:08:14.062145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:41.662 [2024-12-06 07:08:14.062173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:41.662 [2024-12-06 07:08:14.062190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:41:41.662 [2024-12-06 07:08:14.062198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:41.662 [2024-12-06 07:08:14.062206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:41.662 [2024-12-06 07:08:14.062225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:41.662 [2024-12-06 07:08:14.062235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:41.662 [2024-12-06 07:08:14.062245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:41.662 [2024-12-06 07:08:14.062263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:41.662 [2024-12-06 07:08:14.062288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:41.662 [2024-12-06 07:08:14.062313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:41.662 [2024-12-06 07:08:14.062338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:41.662 [2024-12-06 07:08:14.062363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:41.662 [2024-12-06 07:08:14.062388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:41.662 [2024-12-06 07:08:14.062405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:41.662 [2024-12-06 07:08:14.062413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:41.662 [2024-12-06 07:08:14.062421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:41.662 [2024-12-06 07:08:14.062430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:41.662 [2024-12-06 07:08:14.062438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:41.662 [2024-12-06 07:08:14.062447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:41.662 [2024-12-06 07:08:14.062464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:41.662 [2024-12-06 07:08:14.062473] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062481] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:41.662 [2024-12-06 07:08:14.062491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:41.662 [2024-12-06 07:08:14.062500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:41.662 [2024-12-06 07:08:14.062519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:41.662 [2024-12-06 07:08:14.062528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:41.662 [2024-12-06 07:08:14.062536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:41.662 [2024-12-06 07:08:14.062545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:41.662 [2024-12-06 07:08:14.062553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:41.662 [2024-12-06 07:08:14.062562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:41.662 [2024-12-06 07:08:14.062572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:41.662 [2024-12-06 07:08:14.062583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:41.662 [2024-12-06 07:08:14.062601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:41.662 [2024-12-06 07:08:14.062610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:41.662 [2024-12-06 07:08:14.062620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:41.662 [2024-12-06 07:08:14.062629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:41.662 [2024-12-06 07:08:14.062638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:41.662 [2024-12-06 07:08:14.062647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:41.662 [2024-12-06 07:08:14.062656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:41.662 [2024-12-06 07:08:14.062665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:41.663 [2024-12-06 07:08:14.062674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:41.663 [2024-12-06 07:08:14.062683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:41.663 [2024-12-06 07:08:14.062692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:41.663 [2024-12-06 07:08:14.062702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:41.663 [2024-12-06 07:08:14.063100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:41.663 [2024-12-06 07:08:14.063155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:41.663 [2024-12-06 07:08:14.063286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:41.663 [2024-12-06 07:08:14.063350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:41.663 [2024-12-06 07:08:14.063401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:41.663 [2024-12-06 07:08:14.063537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:41.663 [2024-12-06 07:08:14.063653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:41.663 [2024-12-06 07:08:14.063728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:41.663 [2024-12-06 07:08:14.063884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.063919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:41.663 [2024-12-06 07:08:14.063953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms 00:41:41.663 [2024-12-06 07:08:14.063986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
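The figures in the two layout dumps above are internally consistent: blk_offs and blk_sz are counted in FTL blocks, and with the 4 KiB block size these numbers imply, the l2p region's blk_sz:0x5000 is exactly the 80.00 MiB reported earlier, which in turn matches 20971520 L2P entries at 4 bytes each. A quick check of that arithmetic in plain Python; the 4096-byte block size is inferred from the dump itself, not read from any config:

# Cross-check the layout dump: blk_sz (in FTL blocks) vs. the MiB figures,
# assuming the 4 KiB FTL block size these numbers imply.
FTL_BLOCK_SIZE = 4096
MiB = 1024 * 1024

l2p_blocks = 0x5000                       # "Region type:0x2 ... blk_sz:0x5000"
print(l2p_blocks * FTL_BLOCK_SIZE / MiB)  # 80.0 -> "l2p ... blocks: 80.00 MiB"

entries, addr_size = 20971520, 4          # "L2P entries" / "L2P address size"
print(entries * addr_size / MiB)          # 80.0 -> the same 80 MiB L2P table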
00:41:41.663 [2024-12-06 07:08:14.091989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.092229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:41.663 [2024-12-06 07:08:14.092377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.892 ms 00:41:41.663 [2024-12-06 07:08:14.092435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.092700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.092746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:41.663 [2024-12-06 07:08:14.092763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:41:41.663 [2024-12-06 07:08:14.092773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.139742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.139964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:41.663 [2024-12-06 07:08:14.139992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.878 ms 00:41:41.663 [2024-12-06 07:08:14.140006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.140069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.140085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:41.663 [2024-12-06 07:08:14.140103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:41.663 [2024-12-06 07:08:14.140113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.140564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.140583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:41.663 [2024-12-06 07:08:14.140595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:41:41.663 [2024-12-06 07:08:14.140619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.140792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.140812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:41.663 [2024-12-06 07:08:14.140829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:41:41.663 [2024-12-06 07:08:14.140839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.154505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.154547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:41.663 [2024-12-06 07:08:14.154579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.641 ms 00:41:41.663 [2024-12-06 07:08:14.154589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.168217] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:41:41.663 [2024-12-06 07:08:14.168256] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:41.663 [2024-12-06 07:08:14.168290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.168300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:41.663 [2024-12-06 07:08:14.168318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.583 ms 00:41:41.663 [2024-12-06 07:08:14.168345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.192211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.192249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:41.663 [2024-12-06 07:08:14.192281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.823 ms 00:41:41.663 [2024-12-06 07:08:14.192291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.205594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.205633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:41.663 [2024-12-06 07:08:14.205647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.225 ms 00:41:41.663 [2024-12-06 07:08:14.205656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.217952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.218139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:41.663 [2024-12-06 07:08:14.218165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.257 ms 00:41:41.663 [2024-12-06
07:08:14.218176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.663 [2024-12-06 07:08:14.219061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.663 [2024-12-06 07:08:14.219103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:41.663 [2024-12-06 07:08:14.219140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:41:41.663 [2024-12-06 07:08:14.219151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.923 [2024-12-06 07:08:14.278296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.923 [2024-12-06 07:08:14.278362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:41.923 [2024-12-06 07:08:14.278386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.119 ms 00:41:41.923 [2024-12-06 07:08:14.278396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.923 [2024-12-06 07:08:14.288381] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:41.923 [2024-12-06 07:08:14.290349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.923 [2024-12-06 07:08:14.290380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:41.923 [2024-12-06 07:08:14.290394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.897 ms 00:41:41.923 [2024-12-06 07:08:14.290404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.923 [2024-12-06 07:08:14.290511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.923 [2024-12-06 07:08:14.290528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:41.923 [2024-12-06 07:08:14.290542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:41.923 [2024-12-06 07:08:14.290551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.923 [2024-12-06 07:08:14.290633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.923 [2024-12-06 07:08:14.290649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:41.923 [2024-12-06 07:08:14.290659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:41:41.923 [2024-12-06 07:08:14.290668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.924 [2024-12-06 07:08:14.290690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.924 [2024-12-06 07:08:14.290703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:41.924 [2024-12-06 07:08:14.290745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:41:41.924 [2024-12-06 07:08:14.290778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.924 [2024-12-06 07:08:14.290832] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:41.924 [2024-12-06 07:08:14.290848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.924 [2024-12-06 07:08:14.290858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:41.924 [2024-12-06 07:08:14.290868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:41:41.924 [2024-12-06 07:08:14.290877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:41.924 [2024-12-06 07:08:14.315226] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.924 [2024-12-06 07:08:14.315265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:41.924 [2024-12-06 07:08:14.315286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.325 ms 00:41:41.924 [2024-12-06 07:08:14.315295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:41.924 [2024-12-06 07:08:14.315363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:41.924 [2024-12-06 07:08:14.315379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:41.924 [2024-12-06 07:08:14.315389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:41:41.924 [2024-12-06 07:08:14.315398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:41.924 [2024-12-06 07:08:14.316953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.376 ms, result 0
00:41:42.861  [2024-12-06T07:08:16.386Z] Copying: 23/1024 [MB] (23 MBps) ... [2024-12-06T07:08:59.126Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 07:08:59.001498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.535 [2024-12-06 07:08:59.001841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:26.535 [2024-12-06 07:08:59.001986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:26.535 [2024-12-06 07:08:59.002040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.535 [2024-12-06 07:08:59.006291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:42:26.535 [2024-12-06 07:08:59.010627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.535 [2024-12-06 07:08:59.010842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:26.535 [2024-12-06 07:08:59.010968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:42:26.535 [2024-12-06 07:08:59.011014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.535 [2024-12-06 07:08:59.022676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.535 [2024-12-06 07:08:59.022879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:26.535 [2024-12-06 07:08:59.022906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.436 ms 00:42:26.535 [2024-12-06 07:08:59.022927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.535 [2024-12-06 07:08:59.043025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.535 [2024-12-06 07:08:59.043064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:26.535 [2024-12-06 07:08:59.043094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.073 ms 00:42:26.535 [2024-12-06 07:08:59.043104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.535 [2024-12-06 07:08:59.048420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.535 [2024-12-06 07:08:59.048452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:26.535 [2024-12-06 07:08:59.048465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.281 ms 00:42:26.535 [2024-12-06 07:08:59.048482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.535 [2024-12-06 07:08:59.073381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.535 [2024-12-06 07:08:59.073556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:26.535 [2024-12-06 07:08:59.073582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.844 ms 00:42:26.535 [2024-12-06 07:08:59.073593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.535 [2024-12-06 07:08:59.088457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.535 [2024-12-06 07:08:59.088644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:26.535 [2024-12-06 07:08:59.088684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.790 ms 00:42:26.535 [2024-12-06 07:08:59.088695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
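Throughput sanity check (using only the timestamps and counters printed above, nothing else): the copy phase moved 1024 MB between roughly 07:08:16 and 07:08:59, about 43-45 seconds, which is consistent with the per-interval readings of 22-24 MBps and the reported final average of 22 MBps:

  $ echo "scale=1; 1024/45" | bc
  22.7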
00:42:26.796 [2024-12-06 07:08:59.200380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.796 [2024-12-06 07:08:59.200428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:26.796 [2024-12-06 07:08:59.200478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.612 ms 00:42:26.796 [2024-12-06 07:08:59.200489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.796 [2024-12-06 07:08:59.225475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.796 [2024-12-06 07:08:59.225639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:26.796 [2024-12-06 07:08:59.225798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.964 ms 00:42:26.796 [2024-12-06 07:08:59.225936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.796 [2024-12-06 07:08:59.250310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.796 [2024-12-06 07:08:59.250346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:26.796 [2024-12-06 07:08:59.250361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.321 ms 00:42:26.796 [2024-12-06 07:08:59.250370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.796 [2024-12-06 07:08:59.274424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.796 [2024-12-06 07:08:59.274461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:26.796 [2024-12-06 07:08:59.274475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.017 ms 00:42:26.796 [2024-12-06 07:08:59.274485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.796 [2024-12-06 07:08:59.298531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.796 [2024-12-06 07:08:59.298568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:26.796 [2024-12-06 07:08:59.298582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.987 ms 00:42:26.796 [2024-12-06 07:08:59.298591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.796 [2024-12-06 07:08:59.298627] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:42:26.796 [2024-12-06 07:08:59.298647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116224 / 261120 wr_cnt: 1 state: open
00:42:26.796 [2024-12-06 07:08:59.298658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 through Band 100: 0 / 261120 wr_cnt: 0 state: free
00:42:26.797 [2024-12-06 07:08:59.299815] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:26.797 [2024-12-06 07:08:59.299825] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbb34d5e-3007-41d5-8e2b-9cd66c8b840b 00:42:26.797 [2024-12-06 07:08:59.299836] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116224 00:42:26.797 [2024-12-06 07:08:59.299846] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117184 00:42:26.797 [2024-12-06 07:08:59.299855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116224 00:42:26.797 [2024-12-06 07:08:59.299866] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:42:26.797 [2024-12-06 07:08:59.299890] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:26.797 [2024-12-06 07:08:59.299901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:26.797 [2024-12-06 07:08:59.299910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:26.797 [2024-12-06 07:08:59.299919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:26.797 [2024-12-06 07:08:59.299928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
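The WAF figure in the stats dump above is simply total writes divided by user writes, and the dumped counters reproduce it: 117184 / 116224 = 1.00826, which the log rounds to 1.0083. The 960-block difference (117184 - 116224) is write traffic the FTL generated on its own, presumably metadata housekeeping. A quick check with the numbers exactly as printed:

  $ echo "scale=5; 117184/116224" | bc
  1.00825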
00:42:26.797 [2024-12-06 07:08:59.299939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.797 [2024-12-06 07:08:59.299949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:26.797 [2024-12-06 07:08:59.299960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:42:26.797 [2024-12-06 07:08:59.299971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.797 [2024-12-06 07:08:59.313748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.797 [2024-12-06 07:08:59.313783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:26.798 [2024-12-06 07:08:59.313803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.738 ms 00:42:26.798 [2024-12-06 07:08:59.313812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.798 [2024-12-06 07:08:59.314160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:26.798 [2024-12-06 07:08:59.314175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:26.798 [2024-12-06 07:08:59.314185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:42:26.798 [2024-12-06 07:08:59.314193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.798 [2024-12-06 07:08:59.347288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:26.798 [2024-12-06 07:08:59.347464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:26.798 [2024-12-06 07:08:59.347489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:26.798 [2024-12-06 07:08:59.347499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.798 [2024-12-06 07:08:59.347555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:26.798 [2024-12-06 07:08:59.347570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:26.798 [2024-12-06 07:08:59.347580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:26.798 [2024-12-06 07:08:59.347590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.798 [2024-12-06 07:08:59.347671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:26.798 [2024-12-06 07:08:59.347696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:26.798 [2024-12-06 07:08:59.347707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:26.798 [2024-12-06 07:08:59.347733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:26.798 [2024-12-06 07:08:59.347792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:26.798 [2024-12-06 07:08:59.347826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:26.798 [2024-12-06 07:08:59.347838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:26.798 [2024-12-06 07:08:59.347848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:27.057 [2024-12-06 07:08:59.427837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.057 [2024-12-06 07:08:59.428108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:42:27.058 [2024-12-06 07:08:59.428135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.058 [2024-12-06 07:08:59.428146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.058 [2024-12-06 07:08:59.493126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.058 [2024-12-06 07:08:59.493172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:27.058 [2024-12-06 07:08:59.493187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.058 [2024-12-06 07:08:59.493196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.058 [2024-12-06 07:08:59.493274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.058 [2024-12-06 07:08:59.493288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:27.058 [2024-12-06 07:08:59.493298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.058 [2024-12-06 07:08:59.493314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.058 [2024-12-06 07:08:59.493353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.058 [2024-12-06 07:08:59.493367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:27.058 [2024-12-06 07:08:59.493376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.058 [2024-12-06 07:08:59.493385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.058 [2024-12-06 07:08:59.493481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.058 [2024-12-06 07:08:59.493498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:27.058 [2024-12-06 07:08:59.493508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.058 [2024-12-06 07:08:59.493524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.058 [2024-12-06 07:08:59.493565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.058 [2024-12-06 07:08:59.493581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:27.058 [2024-12-06 07:08:59.493590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.058 [2024-12-06 07:08:59.493599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.058 [2024-12-06 07:08:59.493637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.058 [2024-12-06 07:08:59.493650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:27.058 [2024-12-06 07:08:59.493659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.058 [2024-12-06 07:08:59.493668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.058 [2024-12-06 07:08:59.493767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.058 [2024-12-06 07:08:59.493808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:27.058 [2024-12-06 07:08:59.493824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.058 [2024-12-06 07:08:59.493834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.058 [2024-12-06 07:08:59.493998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 494.071 ms, result 0 00:42:28.437 00:42:28.437 00:42:28.437 07:09:00 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:42:28.437 [2024-12-06 07:09:00.860627] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:42:28.437 [2024-12-06 07:09:00.861069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80352 ] 00:42:28.696 [2024-12-06 07:09:01.039835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.696 [2024-12-06 07:09:01.127587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.956 [2024-12-06 07:09:01.412144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:28.956 [2024-12-06 07:09:01.412233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:29.226 [2024-12-06 07:09:01.569830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.226 [2024-12-06 07:09:01.569882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:29.226 [2024-12-06 07:09:01.569916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:29.226 [2024-12-06 07:09:01.569927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.226 [2024-12-06 07:09:01.569987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.226 [2024-12-06 07:09:01.570007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:29.226 [2024-12-06 07:09:01.570019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:42:29.226 [2024-12-06 07:09:01.570029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.226 [2024-12-06 07:09:01.570059] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:29.226 [2024-12-06 07:09:01.570884] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:29.226 [2024-12-06 07:09:01.570920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.226 [2024-12-06 07:09:01.570933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:29.226 [2024-12-06 07:09:01.570945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:42:29.226 [2024-12-06 07:09:01.570955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.226 [2024-12-06 07:09:01.572253] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:29.226 [2024-12-06 07:09:01.585845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.226 [2024-12-06 07:09:01.585884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:29.226 [2024-12-06 07:09:01.585916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.593 ms 00:42:29.226 [2024-12-06 07:09:01.585927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.226 [2024-12-06 07:09:01.586004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:42:29.226 [2024-12-06 07:09:01.586023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:29.226 [2024-12-06 07:09:01.586034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:42:29.227 [2024-12-06 07:09:01.586043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-12-06 07:09:01.590572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-12-06 07:09:01.590608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:29.227 [2024-12-06 07:09:01.590639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.444 ms 00:42:29.227 [2024-12-06 07:09:01.590654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-12-06 07:09:01.590756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-12-06 07:09:01.590798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:29.227 [2024-12-06 07:09:01.590816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:42:29.227 [2024-12-06 07:09:01.590827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-12-06 07:09:01.590880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-12-06 07:09:01.590896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:29.227 [2024-12-06 07:09:01.590907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:42:29.227 [2024-12-06 07:09:01.590916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-12-06 07:09:01.590954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:29.227 [2024-12-06 07:09:01.594670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-12-06 07:09:01.594733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:29.227 [2024-12-06 07:09:01.594763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.723 ms 00:42:29.227 [2024-12-06 07:09:01.594775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-12-06 07:09:01.594813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-12-06 07:09:01.594829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:29.227 [2024-12-06 07:09:01.594840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:29.227 [2024-12-06 07:09:01.594864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-12-06 07:09:01.594907] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:29.227 [2024-12-06 07:09:01.594937] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:29.227 [2024-12-06 07:09:01.594975] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:29.227 [2024-12-06 07:09:01.594996] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:29.227 [2024-12-06 07:09:01.595090] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:29.227 [2024-12-06 07:09:01.595103] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:29.227 [2024-12-06 07:09:01.595116] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:29.227 [2024-12-06 07:09:01.595128] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:29.227 [2024-12-06 07:09:01.595140] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:29.227 [2024-12-06 07:09:01.595150] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:29.227 [2024-12-06 07:09:01.595159] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:29.227 [2024-12-06 07:09:01.595172] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:29.227 [2024-12-06 07:09:01.595181] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:29.227 [2024-12-06 07:09:01.595206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-12-06 07:09:01.595215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:29.227 [2024-12-06 07:09:01.595225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:42:29.227 [2024-12-06 07:09:01.595234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-12-06 07:09:01.595318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-12-06 07:09:01.595334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:29.227 [2024-12-06 07:09:01.595344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:42:29.227 [2024-12-06 07:09:01.595353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-12-06 07:09:01.595455] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:29.227 [2024-12-06 07:09:01.595474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:29.227 [2024-12-06 07:09:01.595485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:29.227 [2024-12-06 07:09:01.595495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:29.227 [2024-12-06 07:09:01.595512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:29.227 [2024-12-06 07:09:01.595531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:29.227 [2024-12-06 07:09:01.595540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:29.227 [2024-12-06 07:09:01.595557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:29.227 [2024-12-06 07:09:01.595565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:29.227 [2024-12-06 07:09:01.595573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:29.227 [2024-12-06 07:09:01.595593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:29.227 [2024-12-06 07:09:01.595603] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:29.227 [2024-12-06 07:09:01.595612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:29.227 [2024-12-06 07:09:01.595628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:29.227 [2024-12-06 07:09:01.595636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:29.227 [2024-12-06 07:09:01.595653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:29.227 [2024-12-06 07:09:01.595670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:29.227 [2024-12-06 07:09:01.595679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:29.227 [2024-12-06 07:09:01.595695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:29.227 [2024-12-06 07:09:01.595703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:29.227 [2024-12-06 07:09:01.595720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:29.227 [2024-12-06 07:09:01.595729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:29.227 [2024-12-06 07:09:01.595996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:29.227 [2024-12-06 07:09:01.596041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:29.227 [2024-12-06 07:09:01.596077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:29.227 [2024-12-06 07:09:01.596187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:29.227 [2024-12-06 07:09:01.596235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:29.227 [2024-12-06 07:09:01.596272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:29.227 [2024-12-06 07:09:01.596306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:29.227 [2024-12-06 07:09:01.596466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:29.227 [2024-12-06 07:09:01.596518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:29.227 [2024-12-06 07:09:01.596558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:29.227 [2024-12-06 07:09:01.596595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:29.227 [2024-12-06 07:09:01.596788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:29.227 [2024-12-06 07:09:01.596832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:29.227 [2024-12-06 07:09:01.596932] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:29.227 [2024-12-06 07:09:01.597066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:29.227 [2024-12-06 07:09:01.597130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:42:29.227 [2024-12-06 07:09:01.597230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:29.227 [2024-12-06 07:09:01.597277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:29.227 [2024-12-06 07:09:01.597292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:29.227 [2024-12-06 07:09:01.597302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:29.227 [2024-12-06 07:09:01.597311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:29.227 [2024-12-06 07:09:01.597320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:29.227 [2024-12-06 07:09:01.597330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:29.227 [2024-12-06 07:09:01.597342] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:29.227 [2024-12-06 07:09:01.597354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:29.227 [2024-12-06 07:09:01.597373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:29.227 [2024-12-06 07:09:01.597383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:29.227 [2024-12-06 07:09:01.597393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:29.227 [2024-12-06 07:09:01.597403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:29.228 [2024-12-06 07:09:01.597413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:29.228 [2024-12-06 07:09:01.597424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:29.228 [2024-12-06 07:09:01.597433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:29.228 [2024-12-06 07:09:01.597443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:29.228 [2024-12-06 07:09:01.597453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:29.228 [2024-12-06 07:09:01.597463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:29.228 [2024-12-06 07:09:01.597473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:29.228 [2024-12-06 07:09:01.597484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:29.228 [2024-12-06 07:09:01.597494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:29.228 [2024-12-06 07:09:01.597504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
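The layout figures above are internally consistent: with 20971520 L2P entries at an address size of 4 bytes, the L2P table needs 20971520 x 4 = 83886080 bytes = 80 MiB, which matches the "Region l2p ... blocks: 80.00 MiB" dump and, assuming the usual 4 KiB FTL block size, also the superblock region reported with blk_sz:0x5000 (20480 blocks). Two quick checks with only those printed values:

  $ echo "20971520 * 4 / 1024 / 1024" | bc
  80
  $ echo "$((0x5000)) * 4096 / 1024 / 1024" | bc
  80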
00:42:29.228 [2024-12-06 07:09:01.597515] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:29.228 [2024-12-06 07:09:01.597526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:29.228 [2024-12-06 07:09:01.597537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:29.228 [2024-12-06 07:09:01.597547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:29.228 [2024-12-06 07:09:01.597557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:29.228 [2024-12-06 07:09:01.597567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:29.228 [2024-12-06 07:09:01.597580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.597590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:29.228 [2024-12-06 07:09:01.597601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:42:29.228 [2024-12-06 07:09:01.597611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.626296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.626365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:29.228 [2024-12-06 07:09:01.626399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.582 ms 00:42:29.228 [2024-12-06 07:09:01.626414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.626516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.626533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:29.228 [2024-12-06 07:09:01.626544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:42:29.228 [2024-12-06 07:09:01.626554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.669144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.669191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:29.228 [2024-12-06 07:09:01.669223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.501 ms 00:42:29.228 [2024-12-06 07:09:01.669233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.669286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.669303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:29.228 [2024-12-06 07:09:01.669320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:42:29.228 [2024-12-06 07:09:01.669344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.669709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.669727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:29.228 [2024-12-06 
07:09:01.669770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:42:29.228 [2024-12-06 07:09:01.669789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.669951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.669970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:29.228 [2024-12-06 07:09:01.669987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:42:29.228 [2024-12-06 07:09:01.669996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.684161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.684414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:29.228 [2024-12-06 07:09:01.684443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.140 ms 00:42:29.228 [2024-12-06 07:09:01.684466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.698117] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:42:29.228 [2024-12-06 07:09:01.698155] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:29.228 [2024-12-06 07:09:01.698187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.698197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:29.228 [2024-12-06 07:09:01.698207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.588 ms 00:42:29.228 [2024-12-06 07:09:01.698216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.722176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.722213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:29.228 [2024-12-06 07:09:01.722244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.917 ms 00:42:29.228 [2024-12-06 07:09:01.722254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.735263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.735300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:29.228 [2024-12-06 07:09:01.735330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.958 ms 00:42:29.228 [2024-12-06 07:09:01.735339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.748140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.748347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:29.228 [2024-12-06 07:09:01.748405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.761 ms 00:42:29.228 [2024-12-06 07:09:01.748418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.749274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.749311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:29.228 [2024-12-06 07:09:01.749329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.734 ms 00:42:29.228 [2024-12-06 07:09:01.749340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.228 [2024-12-06 07:09:01.809490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.228 [2024-12-06 07:09:01.809590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:29.228 [2024-12-06 07:09:01.809661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.108 ms 00:42:29.228 [2024-12-06 07:09:01.809672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.505 [2024-12-06 07:09:01.823146] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:29.505 [2024-12-06 07:09:01.825345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.506 [2024-12-06 07:09:01.825538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:29.506 [2024-12-06 07:09:01.825564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.536 ms 00:42:29.506 [2024-12-06 07:09:01.825575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.506 [2024-12-06 07:09:01.825696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.506 [2024-12-06 07:09:01.825716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:29.506 [2024-12-06 07:09:01.825766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:29.506 [2024-12-06 07:09:01.825785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.506 [2024-12-06 07:09:01.827315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.506 [2024-12-06 07:09:01.827347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:29.506 [2024-12-06 07:09:01.827376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:42:29.506 [2024-12-06 07:09:01.827386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.506 [2024-12-06 07:09:01.827419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.506 [2024-12-06 07:09:01.827433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:29.506 [2024-12-06 07:09:01.827445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:29.506 [2024-12-06 07:09:01.827454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.506 [2024-12-06 07:09:01.827497] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:29.506 [2024-12-06 07:09:01.827527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.506 [2024-12-06 07:09:01.827537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:29.506 [2024-12-06 07:09:01.827548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:42:29.506 [2024-12-06 07:09:01.827557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.506 [2024-12-06 07:09:01.853180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.506 [2024-12-06 07:09:01.853219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:29.506 [2024-12-06 07:09:01.853255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.584 ms 00:42:29.506 [2024-12-06 07:09:01.853266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:42:29.506 [2024-12-06 07:09:01.853338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.506 [2024-12-06 07:09:01.853354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:29.506 [2024-12-06 07:09:01.853365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:42:29.506 [2024-12-06 07:09:01.853374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.506 [2024-12-06 07:09:01.854568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.148 ms, result 0 00:42:30.482  [2024-12-06T07:09:04.451Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-06T07:09:05.387Z] Copying: 42/1024 [MB] (23 MBps) [2024-12-06T07:09:06.317Z] Copying: 64/1024 [MB] (21 MBps) [2024-12-06T07:09:07.264Z] Copying: 86/1024 [MB] (22 MBps) [2024-12-06T07:09:08.194Z] Copying: 108/1024 [MB] (21 MBps) [2024-12-06T07:09:09.128Z] Copying: 129/1024 [MB] (21 MBps) [2024-12-06T07:09:10.065Z] Copying: 151/1024 [MB] (21 MBps) [2024-12-06T07:09:11.446Z] Copying: 173/1024 [MB] (21 MBps) [2024-12-06T07:09:12.384Z] Copying: 195/1024 [MB] (22 MBps) [2024-12-06T07:09:13.323Z] Copying: 218/1024 [MB] (22 MBps) [2024-12-06T07:09:14.258Z] Copying: 240/1024 [MB] (21 MBps) [2024-12-06T07:09:15.194Z] Copying: 262/1024 [MB] (22 MBps) [2024-12-06T07:09:16.127Z] Copying: 285/1024 [MB] (22 MBps) [2024-12-06T07:09:17.063Z] Copying: 307/1024 [MB] (22 MBps) [2024-12-06T07:09:18.437Z] Copying: 330/1024 [MB] (22 MBps) [2024-12-06T07:09:19.372Z] Copying: 352/1024 [MB] (22 MBps) [2024-12-06T07:09:20.308Z] Copying: 375/1024 [MB] (22 MBps) [2024-12-06T07:09:21.246Z] Copying: 397/1024 [MB] (22 MBps) [2024-12-06T07:09:22.186Z] Copying: 419/1024 [MB] (22 MBps) [2024-12-06T07:09:23.123Z] Copying: 443/1024 [MB] (23 MBps) [2024-12-06T07:09:24.057Z] Copying: 467/1024 [MB] (23 MBps) [2024-12-06T07:09:25.434Z] Copying: 490/1024 [MB] (23 MBps) [2024-12-06T07:09:26.371Z] Copying: 513/1024 [MB] (23 MBps) [2024-12-06T07:09:27.308Z] Copying: 536/1024 [MB] (23 MBps) [2024-12-06T07:09:28.259Z] Copying: 559/1024 [MB] (22 MBps) [2024-12-06T07:09:29.211Z] Copying: 582/1024 [MB] (22 MBps) [2024-12-06T07:09:30.150Z] Copying: 604/1024 [MB] (22 MBps) [2024-12-06T07:09:31.087Z] Copying: 627/1024 [MB] (22 MBps) [2024-12-06T07:09:32.466Z] Copying: 649/1024 [MB] (22 MBps) [2024-12-06T07:09:33.404Z] Copying: 672/1024 [MB] (22 MBps) [2024-12-06T07:09:34.341Z] Copying: 695/1024 [MB] (22 MBps) [2024-12-06T07:09:35.277Z] Copying: 717/1024 [MB] (22 MBps) [2024-12-06T07:09:36.214Z] Copying: 740/1024 [MB] (22 MBps) [2024-12-06T07:09:37.154Z] Copying: 762/1024 [MB] (22 MBps) [2024-12-06T07:09:38.092Z] Copying: 785/1024 [MB] (22 MBps) [2024-12-06T07:09:39.472Z] Copying: 807/1024 [MB] (22 MBps) [2024-12-06T07:09:40.042Z] Copying: 830/1024 [MB] (22 MBps) [2024-12-06T07:09:41.421Z] Copying: 852/1024 [MB] (22 MBps) [2024-12-06T07:09:42.359Z] Copying: 875/1024 [MB] (22 MBps) [2024-12-06T07:09:43.297Z] Copying: 898/1024 [MB] (22 MBps) [2024-12-06T07:09:44.234Z] Copying: 921/1024 [MB] (23 MBps) [2024-12-06T07:09:45.171Z] Copying: 944/1024 [MB] (22 MBps) [2024-12-06T07:09:46.104Z] Copying: 966/1024 [MB] (22 MBps) [2024-12-06T07:09:47.477Z] Copying: 989/1024 [MB] (22 MBps) [2024-12-06T07:09:47.736Z] Copying: 1012/1024 [MB] (22 MBps) [2024-12-06T07:09:47.994Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 07:09:47.784197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.403 [2024-12-06 07:09:47.784264] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:15.403 [2024-12-06 07:09:47.784312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:43:15.404 [2024-12-06 07:09:47.784323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.404 [2024-12-06 07:09:47.784352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:15.404 [2024-12-06 07:09:47.788259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.404 [2024-12-06 07:09:47.788440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:15.404 [2024-12-06 07:09:47.788563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:43:15.404 [2024-12-06 07:09:47.788703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.404 [2024-12-06 07:09:47.789004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.404 [2024-12-06 07:09:47.789066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:15.404 [2024-12-06 07:09:47.789193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:43:15.404 [2024-12-06 07:09:47.789322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.404 [2024-12-06 07:09:47.793661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.404 [2024-12-06 07:09:47.793922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:15.404 [2024-12-06 07:09:47.794058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:43:15.404 [2024-12-06 07:09:47.794107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.404 [2024-12-06 07:09:47.800256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.404 [2024-12-06 07:09:47.800443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:15.404 [2024-12-06 07:09:47.800577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.018 ms 00:43:15.404 [2024-12-06 07:09:47.800717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.404 [2024-12-06 07:09:47.827232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.404 [2024-12-06 07:09:47.827416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:15.404 [2024-12-06 07:09:47.827571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.379 ms 00:43:15.404 [2024-12-06 07:09:47.827621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.404 [2024-12-06 07:09:47.842774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.404 [2024-12-06 07:09:47.842857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:15.404 [2024-12-06 07:09:47.842958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.083 ms 00:43:15.404 [2024-12-06 07:09:47.843030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.404 [2024-12-06 07:09:47.962565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.404 [2024-12-06 07:09:47.962796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:15.404 [2024-12-06 07:09:47.962907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.457 ms 00:43:15.404 [2024-12-06 07:09:47.963008] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.404 [2024-12-06 07:09:47.988629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.404 [2024-12-06 07:09:47.988902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:15.404 [2024-12-06 07:09:47.989033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.560 ms 00:43:15.404 [2024-12-06 07:09:47.989083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.663 [2024-12-06 07:09:48.014938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.663 [2024-12-06 07:09:48.015103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:15.663 [2024-12-06 07:09:48.015250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.653 ms 00:43:15.663 [2024-12-06 07:09:48.015271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.663 [2024-12-06 07:09:48.039685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.663 [2024-12-06 07:09:48.039733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:15.663 [2024-12-06 07:09:48.039747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.370 ms 00:43:15.663 [2024-12-06 07:09:48.039756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.663 [2024-12-06 07:09:48.063916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.663 [2024-12-06 07:09:48.063953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:15.663 [2024-12-06 07:09:48.063983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.094 ms 00:43:15.663 [2024-12-06 07:09:48.063992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.663 [2024-12-06 07:09:48.064031] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:15.663 [2024-12-06 07:09:48.064052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:43:15.663 [2024-12-06 07:09:48.064065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 
state: free 00:43:15.663 [2024-12-06 07:09:48.064171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 
0 / 261120 wr_cnt: 0 state: free 00:43:15.663 [2024-12-06 07:09:48.064424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.064716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.065993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066022] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:15.664 [2024-12-06 07:09:48.066177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:15.664 [2024-12-06 07:09:48.066201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbb34d5e-3007-41d5-8e2b-9cd66c8b840b 00:43:15.664 [2024-12-06 07:09:48.066211] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:43:15.664 [2024-12-06 07:09:48.066220] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15808 00:43:15.664 [2024-12-06 07:09:48.066229] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14848 00:43:15.664 [2024-12-06 07:09:48.066239] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0647 00:43:15.664 [2024-12-06 07:09:48.066256] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:15.664 [2024-12-06 07:09:48.066276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:15.664 [2024-12-06 07:09:48.066286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:15.664 [2024-12-06 07:09:48.066295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:15.664 [2024-12-06 07:09:48.066303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:15.664 [2024-12-06 07:09:48.066313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.664 [2024-12-06 07:09:48.066323] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:15.664 [2024-12-06 07:09:48.066332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.284 ms 00:43:15.664 [2024-12-06 07:09:48.066356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.664 [2024-12-06 07:09:48.079575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.664 [2024-12-06 07:09:48.079609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:15.664 [2024-12-06 07:09:48.079629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.195 ms 00:43:15.664 [2024-12-06 07:09:48.079638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.664 [2024-12-06 07:09:48.080224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:15.664 [2024-12-06 07:09:48.080262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:15.664 [2024-12-06 07:09:48.080277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:43:15.664 [2024-12-06 07:09:48.080286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.664 [2024-12-06 07:09:48.113533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.664 [2024-12-06 07:09:48.113577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:15.664 [2024-12-06 07:09:48.113591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.664 [2024-12-06 07:09:48.113600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.664 [2024-12-06 07:09:48.113650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.664 [2024-12-06 07:09:48.113665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:15.664 [2024-12-06 07:09:48.113675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.665 [2024-12-06 07:09:48.113683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.665 [2024-12-06 07:09:48.113814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.665 [2024-12-06 07:09:48.113839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:15.665 [2024-12-06 07:09:48.113868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.665 [2024-12-06 07:09:48.113886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.665 [2024-12-06 07:09:48.113921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.665 [2024-12-06 07:09:48.113958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:15.665 [2024-12-06 07:09:48.114008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.665 [2024-12-06 07:09:48.114027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.665 [2024-12-06 07:09:48.192403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.665 [2024-12-06 07:09:48.192479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:15.665 [2024-12-06 07:09:48.192512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.665 [2024-12-06 07:09:48.192521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.924 [2024-12-06 07:09:48.258781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
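Annotation: the WAF figure in the ftl_dev_dump_stats block above is simply total media writes divided by user-initiated writes. A one-line sanity check, reusing the counters exactly as the log reports them:

  total_writes=15808   # "total writes" from the stats dump above
  user_writes=14848    # "user writes" from the stats dump above
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
  # -> WAF: 1.0647, matching the [FTL][ftl0] WAF line in the dump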
00:43:15.924 [2024-12-06 07:09:48.258830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:15.924 [2024-12-06 07:09:48.258846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.924 [2024-12-06 07:09:48.258856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.924 [2024-12-06 07:09:48.258941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.924 [2024-12-06 07:09:48.258958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:15.924 [2024-12-06 07:09:48.258969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.924 [2024-12-06 07:09:48.258985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.924 [2024-12-06 07:09:48.259080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.924 [2024-12-06 07:09:48.259121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:15.924 [2024-12-06 07:09:48.259132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.924 [2024-12-06 07:09:48.259143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.924 [2024-12-06 07:09:48.259267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.924 [2024-12-06 07:09:48.259291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:15.924 [2024-12-06 07:09:48.259303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.924 [2024-12-06 07:09:48.259313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.924 [2024-12-06 07:09:48.259367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.924 [2024-12-06 07:09:48.259388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:15.924 [2024-12-06 07:09:48.259400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.924 [2024-12-06 07:09:48.259410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.924 [2024-12-06 07:09:48.259460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.924 [2024-12-06 07:09:48.259477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:15.924 [2024-12-06 07:09:48.259488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.924 [2024-12-06 07:09:48.259498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.924 [2024-12-06 07:09:48.259555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:15.924 [2024-12-06 07:09:48.259571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:15.924 [2024-12-06 07:09:48.259583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:15.924 [2024-12-06 07:09:48.259593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:15.924 [2024-12-06 07:09:48.259750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 475.510 ms, result 0 00:43:16.491 00:43:16.491 00:43:16.491 07:09:49 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:43:18.392 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 
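Annotation: the "testfile: OK" line above is restore.sh's data-integrity gate: a checksum recorded before the dirty shutdown must still verify after the FTL device comes back. A minimal sketch of that pattern, with an illustrative path rather than the test's own:

  testfile=/tmp/ftl_testfile                 # illustrative path, not the repo's
  md5sum "$testfile" > "$testfile.md5"       # recorded while the data is known-good
  # ... dirty shutdown and restore of the FTL bdev happen in between ...
  md5sum -c "$testfile.md5"                  # prints "<file>: OK" only on a byte-exact match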
00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 78746 00:43:18.392 07:09:50 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78746 ']' 00:43:18.392 07:09:50 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78746 00:43:18.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78746) - No such process 00:43:18.392 Process with pid 78746 is not found 00:43:18.392 07:09:50 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78746 is not found' 00:43:18.392 Remove shared memory files 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:43:18.392 07:09:50 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:43:18.392 ************************************ 00:43:18.392 END TEST ftl_restore 00:43:18.392 ************************************ 00:43:18.392 00:43:18.392 real 3m30.126s 00:43:18.392 user 3m17.115s 00:43:18.392 sys 0m14.497s 00:43:18.392 07:09:50 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:18.392 07:09:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:43:18.392 07:09:50 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:43:18.392 07:09:50 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:43:18.392 07:09:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:18.392 07:09:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:43:18.392 ************************************ 00:43:18.392 START TEST ftl_dirty_shutdown 00:43:18.392 ************************************ 00:43:18.392 07:09:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:43:18.650 * Looking for test storage... 
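Annotation: killprocess above probes pid 78746 with kill -0, which delivers no signal and only asks the kernel whether the pid exists; the "No such process" result means the target already exited, so only the not-found message is printed. The pattern in isolation (a sketch):

  pid=78746                                  # pid taken from the log; any pid fits the pattern
  if kill -0 "$pid" 2>/dev/null; then
      kill "$pid"                            # still alive: terminate it
  else
      echo "Process with pid $pid is not found"
  fi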
00:43:18.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:43:18.650 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:18.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:18.651 --rc genhtml_branch_coverage=1 00:43:18.651 --rc genhtml_function_coverage=1 00:43:18.651 --rc genhtml_legend=1 00:43:18.651 --rc geninfo_all_blocks=1 00:43:18.651 --rc geninfo_unexecuted_blocks=1 00:43:18.651 00:43:18.651 ' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:18.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:18.651 --rc genhtml_branch_coverage=1 00:43:18.651 --rc genhtml_function_coverage=1 00:43:18.651 --rc genhtml_legend=1 00:43:18.651 --rc geninfo_all_blocks=1 00:43:18.651 --rc geninfo_unexecuted_blocks=1 00:43:18.651 00:43:18.651 ' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:18.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:18.651 --rc genhtml_branch_coverage=1 00:43:18.651 --rc genhtml_function_coverage=1 00:43:18.651 --rc genhtml_legend=1 00:43:18.651 --rc geninfo_all_blocks=1 00:43:18.651 --rc geninfo_unexecuted_blocks=1 00:43:18.651 00:43:18.651 ' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:18.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:18.651 --rc genhtml_branch_coverage=1 00:43:18.651 --rc genhtml_function_coverage=1 00:43:18.651 --rc genhtml_legend=1 00:43:18.651 --rc geninfo_all_blocks=1 00:43:18.651 --rc geninfo_unexecuted_blocks=1 00:43:18.651 00:43:18.651 ' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:43:18.651 07:09:51 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80904 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80904 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80904 ']' 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:18.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:18.651 07:09:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:43:18.910 [2024-12-06 07:09:51.310043] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
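Annotation: the cmp_versions trace a few entries back splits each version string on ".", "-" and ":" (via IFS=.-:) into the ver1/ver2 arrays and compares fields numerically left to right, which is how common.sh concludes that lcov 1.15 predates 2. A condensed sketch of the same comparison, not the script's literal code:

  lt() {
      local IFS=.-: i v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                                          # equal versions: not less-than
  }
  lt 1.15 2 && echo "1.15 < 2"                          # the branch this run takes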
00:43:18.910 [2024-12-06 07:09:51.310480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80904 ] 00:43:18.910 [2024-12-06 07:09:51.487892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:19.169 [2024-12-06 07:09:51.572258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:19.738 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:19.738 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:43:19.738 07:09:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:43:19.738 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:43:19.738 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:43:19.738 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:43:19.738 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:43:19.738 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:43:19.997 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:43:19.997 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:43:19.997 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:43:19.997 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:43:19.997 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:19.997 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:43:19.997 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:43:19.997 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:43:20.255 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:20.255 { 00:43:20.255 "name": "nvme0n1", 00:43:20.255 "aliases": [ 00:43:20.255 "ae4c495b-15cc-4968-8b61-9953e3a0f6d4" 00:43:20.255 ], 00:43:20.255 "product_name": "NVMe disk", 00:43:20.255 "block_size": 4096, 00:43:20.255 "num_blocks": 1310720, 00:43:20.255 "uuid": "ae4c495b-15cc-4968-8b61-9953e3a0f6d4", 00:43:20.255 "numa_id": -1, 00:43:20.255 "assigned_rate_limits": { 00:43:20.255 "rw_ios_per_sec": 0, 00:43:20.255 "rw_mbytes_per_sec": 0, 00:43:20.255 "r_mbytes_per_sec": 0, 00:43:20.255 "w_mbytes_per_sec": 0 00:43:20.255 }, 00:43:20.255 "claimed": true, 00:43:20.255 "claim_type": "read_many_write_one", 00:43:20.255 "zoned": false, 00:43:20.255 "supported_io_types": { 00:43:20.255 "read": true, 00:43:20.255 "write": true, 00:43:20.255 "unmap": true, 00:43:20.255 "flush": true, 00:43:20.255 "reset": true, 00:43:20.255 "nvme_admin": true, 00:43:20.255 "nvme_io": true, 00:43:20.255 "nvme_io_md": false, 00:43:20.255 "write_zeroes": true, 00:43:20.255 "zcopy": false, 00:43:20.255 "get_zone_info": false, 00:43:20.255 "zone_management": false, 00:43:20.255 "zone_append": false, 00:43:20.255 "compare": true, 00:43:20.255 "compare_and_write": false, 00:43:20.255 "abort": true, 00:43:20.255 "seek_hole": false, 00:43:20.255 "seek_data": false, 00:43:20.255 
"copy": true, 00:43:20.255 "nvme_iov_md": false 00:43:20.255 }, 00:43:20.255 "driver_specific": { 00:43:20.255 "nvme": [ 00:43:20.255 { 00:43:20.255 "pci_address": "0000:00:11.0", 00:43:20.255 "trid": { 00:43:20.255 "trtype": "PCIe", 00:43:20.255 "traddr": "0000:00:11.0" 00:43:20.255 }, 00:43:20.255 "ctrlr_data": { 00:43:20.255 "cntlid": 0, 00:43:20.255 "vendor_id": "0x1b36", 00:43:20.255 "model_number": "QEMU NVMe Ctrl", 00:43:20.256 "serial_number": "12341", 00:43:20.256 "firmware_revision": "8.0.0", 00:43:20.256 "subnqn": "nqn.2019-08.org.qemu:12341", 00:43:20.256 "oacs": { 00:43:20.256 "security": 0, 00:43:20.256 "format": 1, 00:43:20.256 "firmware": 0, 00:43:20.256 "ns_manage": 1 00:43:20.256 }, 00:43:20.256 "multi_ctrlr": false, 00:43:20.256 "ana_reporting": false 00:43:20.256 }, 00:43:20.256 "vs": { 00:43:20.256 "nvme_version": "1.4" 00:43:20.256 }, 00:43:20.256 "ns_data": { 00:43:20.256 "id": 1, 00:43:20.256 "can_share": false 00:43:20.256 } 00:43:20.256 } 00:43:20.256 ], 00:43:20.256 "mp_policy": "active_passive" 00:43:20.256 } 00:43:20.256 } 00:43:20.256 ]' 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:43:20.256 07:09:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:43:20.847 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=fa42ec0c-d4ac-48dd-8c1e-2a439c8579ef 00:43:20.847 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:43:20.847 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa42ec0c-d4ac-48dd-8c1e-2a439c8579ef 00:43:20.847 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:43:21.105 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f9774b37-3afb-4768-97a2-8597fc972b2e 00:43:21.106 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f9774b37-3afb-4768-97a2-8597fc972b2e 00:43:21.365 07:09:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:21.365 07:09:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:43:21.365 07:09:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:21.365 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:43:21.365 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:43:21.365 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:21.365 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:43:21.365 07:09:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:21.366 07:09:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:21.366 07:09:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:21.366 07:09:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:43:21.366 07:09:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:43:21.366 07:09:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:21.626 { 00:43:21.626 "name": "59c0ffc4-481d-4189-9ffa-061730b294d1", 00:43:21.626 "aliases": [ 00:43:21.626 "lvs/nvme0n1p0" 00:43:21.626 ], 00:43:21.626 "product_name": "Logical Volume", 00:43:21.626 "block_size": 4096, 00:43:21.626 "num_blocks": 26476544, 00:43:21.626 "uuid": "59c0ffc4-481d-4189-9ffa-061730b294d1", 00:43:21.626 "assigned_rate_limits": { 00:43:21.626 "rw_ios_per_sec": 0, 00:43:21.626 "rw_mbytes_per_sec": 0, 00:43:21.626 "r_mbytes_per_sec": 0, 00:43:21.626 "w_mbytes_per_sec": 0 00:43:21.626 }, 00:43:21.626 "claimed": false, 00:43:21.626 "zoned": false, 00:43:21.626 "supported_io_types": { 00:43:21.626 "read": true, 00:43:21.626 "write": true, 00:43:21.626 "unmap": true, 00:43:21.626 "flush": false, 00:43:21.626 "reset": true, 00:43:21.626 "nvme_admin": false, 00:43:21.626 "nvme_io": false, 00:43:21.626 "nvme_io_md": false, 00:43:21.626 "write_zeroes": true, 00:43:21.626 "zcopy": false, 00:43:21.626 "get_zone_info": false, 00:43:21.626 "zone_management": false, 00:43:21.626 "zone_append": false, 00:43:21.626 "compare": false, 00:43:21.626 "compare_and_write": false, 00:43:21.626 "abort": false, 00:43:21.626 "seek_hole": true, 00:43:21.626 "seek_data": true, 00:43:21.626 "copy": false, 00:43:21.626 "nvme_iov_md": false 00:43:21.626 }, 00:43:21.626 "driver_specific": { 00:43:21.626 "lvol": { 00:43:21.626 "lvol_store_uuid": "f9774b37-3afb-4768-97a2-8597fc972b2e", 00:43:21.626 "base_bdev": "nvme0n1", 00:43:21.626 "thin_provision": true, 00:43:21.626 "num_allocated_clusters": 0, 00:43:21.626 "snapshot": false, 00:43:21.626 "clone": false, 00:43:21.626 "esnap_clone": false 00:43:21.626 } 00:43:21.626 } 00:43:21.626 } 00:43:21.626 ]' 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:43:21.626 07:09:54 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:43:22.195 07:09:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:43:22.195 07:09:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:43:22.195 07:09:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:22.195 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:22.195 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:22.195 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:43:22.195 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:43:22.195 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:22.453 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:22.454 { 00:43:22.454 "name": "59c0ffc4-481d-4189-9ffa-061730b294d1", 00:43:22.454 "aliases": [ 00:43:22.454 "lvs/nvme0n1p0" 00:43:22.454 ], 00:43:22.454 "product_name": "Logical Volume", 00:43:22.454 "block_size": 4096, 00:43:22.454 "num_blocks": 26476544, 00:43:22.454 "uuid": "59c0ffc4-481d-4189-9ffa-061730b294d1", 00:43:22.454 "assigned_rate_limits": { 00:43:22.454 "rw_ios_per_sec": 0, 00:43:22.454 "rw_mbytes_per_sec": 0, 00:43:22.454 "r_mbytes_per_sec": 0, 00:43:22.454 "w_mbytes_per_sec": 0 00:43:22.454 }, 00:43:22.454 "claimed": false, 00:43:22.454 "zoned": false, 00:43:22.454 "supported_io_types": { 00:43:22.454 "read": true, 00:43:22.454 "write": true, 00:43:22.454 "unmap": true, 00:43:22.454 "flush": false, 00:43:22.454 "reset": true, 00:43:22.454 "nvme_admin": false, 00:43:22.454 "nvme_io": false, 00:43:22.454 "nvme_io_md": false, 00:43:22.454 "write_zeroes": true, 00:43:22.454 "zcopy": false, 00:43:22.454 "get_zone_info": false, 00:43:22.454 "zone_management": false, 00:43:22.454 "zone_append": false, 00:43:22.454 "compare": false, 00:43:22.454 "compare_and_write": false, 00:43:22.454 "abort": false, 00:43:22.454 "seek_hole": true, 00:43:22.454 "seek_data": true, 00:43:22.454 "copy": false, 00:43:22.454 "nvme_iov_md": false 00:43:22.454 }, 00:43:22.454 "driver_specific": { 00:43:22.454 "lvol": { 00:43:22.454 "lvol_store_uuid": "f9774b37-3afb-4768-97a2-8597fc972b2e", 00:43:22.454 "base_bdev": "nvme0n1", 00:43:22.454 "thin_provision": true, 00:43:22.454 "num_allocated_clusters": 0, 00:43:22.454 "snapshot": false, 00:43:22.454 "clone": false, 00:43:22.454 "esnap_clone": false 00:43:22.454 } 00:43:22.454 } 00:43:22.454 } 00:43:22.454 ]' 00:43:22.454 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:22.454 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:43:22.454 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:22.454 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:43:22.454 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:22.454 07:09:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:43:22.454 07:09:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:43:22.454 07:09:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:43:22.712 07:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:43:22.712 07:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:22.712 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:22.712 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:22.712 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:43:22.712 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:43:22.712 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59c0ffc4-481d-4189-9ffa-061730b294d1 00:43:22.970 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:22.970 { 00:43:22.970 "name": "59c0ffc4-481d-4189-9ffa-061730b294d1", 00:43:22.970 "aliases": [ 00:43:22.970 "lvs/nvme0n1p0" 00:43:22.970 ], 00:43:22.970 "product_name": "Logical Volume", 00:43:22.970 "block_size": 4096, 00:43:22.970 "num_blocks": 26476544, 00:43:22.970 "uuid": "59c0ffc4-481d-4189-9ffa-061730b294d1", 00:43:22.970 "assigned_rate_limits": { 00:43:22.970 "rw_ios_per_sec": 0, 00:43:22.970 "rw_mbytes_per_sec": 0, 00:43:22.970 "r_mbytes_per_sec": 0, 00:43:22.970 "w_mbytes_per_sec": 0 00:43:22.970 }, 00:43:22.970 "claimed": false, 00:43:22.970 "zoned": false, 00:43:22.970 "supported_io_types": { 00:43:22.970 "read": true, 00:43:22.970 "write": true, 00:43:22.970 "unmap": true, 00:43:22.970 "flush": false, 00:43:22.970 "reset": true, 00:43:22.970 "nvme_admin": false, 00:43:22.970 "nvme_io": false, 00:43:22.970 "nvme_io_md": false, 00:43:22.970 "write_zeroes": true, 00:43:22.970 "zcopy": false, 00:43:22.970 "get_zone_info": false, 00:43:22.970 "zone_management": false, 00:43:22.970 "zone_append": false, 00:43:22.970 "compare": false, 00:43:22.970 "compare_and_write": false, 00:43:22.970 "abort": false, 00:43:22.970 "seek_hole": true, 00:43:22.970 "seek_data": true, 00:43:22.970 "copy": false, 00:43:22.970 "nvme_iov_md": false 00:43:22.970 }, 00:43:22.970 "driver_specific": { 00:43:22.970 "lvol": { 00:43:22.970 "lvol_store_uuid": "f9774b37-3afb-4768-97a2-8597fc972b2e", 00:43:22.970 "base_bdev": "nvme0n1", 00:43:22.970 "thin_provision": true, 00:43:22.970 "num_allocated_clusters": 0, 00:43:22.970 "snapshot": false, 00:43:22.970 "clone": false, 00:43:22.970 "esnap_clone": false 00:43:22.970 } 00:43:22.970 } 00:43:22.970 } 00:43:22.970 ]' 00:43:22.970 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:22.970 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:43:22.970 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:22.970 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:43:22.971 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:43:22.971 07:09:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:43:22.971 07:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:43:22.971 07:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 59c0ffc4-481d-4189-9ffa-061730b294d1 
--l2p_dram_limit 10' 00:43:22.971 07:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:43:22.971 07:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:43:22.971 07:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:43:22.971 07:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 59c0ffc4-481d-4189-9ffa-061730b294d1 --l2p_dram_limit 10 -c nvc0n1p0 00:43:23.230 [2024-12-06 07:09:55.787550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.230 [2024-12-06 07:09:55.787772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:23.230 [2024-12-06 07:09:55.787907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:23.230 [2024-12-06 07:09:55.787956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.230 [2024-12-06 07:09:55.788171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.230 [2024-12-06 07:09:55.788306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:23.230 [2024-12-06 07:09:55.788462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:43:23.230 [2024-12-06 07:09:55.788510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.230 [2024-12-06 07:09:55.788686] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:23.230 [2024-12-06 07:09:55.789690] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:23.230 [2024-12-06 07:09:55.789908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.230 [2024-12-06 07:09:55.790020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:23.230 [2024-12-06 07:09:55.790071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:43:23.230 [2024-12-06 07:09:55.790173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.230 [2024-12-06 07:09:55.790442] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3095e218-c1bf-4965-bf6a-aaea9793f2eb 00:43:23.230 [2024-12-06 07:09:55.791555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.230 [2024-12-06 07:09:55.791741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:43:23.230 [2024-12-06 07:09:55.791876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:43:23.230 [2024-12-06 07:09:55.791929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.230 [2024-12-06 07:09:55.796802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.230 [2024-12-06 07:09:55.797013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:23.230 [2024-12-06 07:09:55.797130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:43:23.230 [2024-12-06 07:09:55.797180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.231 [2024-12-06 07:09:55.797320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.231 [2024-12-06 07:09:55.797376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:23.231 [2024-12-06 07:09:55.797414] 
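Condensing the construction sequence traced above into its bare RPC calls (all arguments are copied from this run; the bdev names and PCIe address are specific to it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    base=59c0ffc4-481d-4189-9ffa-061730b294d1            # thin-provisioned lvol (see dump above)
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_split_create nvc0n1 -s 5171 1            # one 5171 MiB partition -> nvc0n1p0
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 10

The -t 240 raises the RPC client timeout to 240 s, which matters here because first-time creation scrubs the NV cache data region before the FTL instance becomes usable (the "needs scrubbing, this may take a while" notice further down).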
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:43:23.231 [2024-12-06 07:09:55.797513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.231 [2024-12-06 07:09:55.797697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.231 [2024-12-06 07:09:55.797737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:23.231 [2024-12-06 07:09:55.797754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:43:23.231 [2024-12-06 07:09:55.797766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.231 [2024-12-06 07:09:55.797797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:23.231 [2024-12-06 07:09:55.801719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.231 [2024-12-06 07:09:55.801752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:23.231 [2024-12-06 07:09:55.801769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms 00:43:23.231 [2024-12-06 07:09:55.801779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.231 [2024-12-06 07:09:55.801819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.231 [2024-12-06 07:09:55.801833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:23.231 [2024-12-06 07:09:55.801845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:43:23.231 [2024-12-06 07:09:55.801854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.231 [2024-12-06 07:09:55.801905] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:43:23.231 [2024-12-06 07:09:55.802036] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:23.231 [2024-12-06 07:09:55.802056] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:23.231 [2024-12-06 07:09:55.802069] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:23.231 [2024-12-06 07:09:55.802083] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802093] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802105] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:43:23.231 [2024-12-06 07:09:55.802114] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:23.231 [2024-12-06 07:09:55.802129] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:23.231 [2024-12-06 07:09:55.802138] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:23.231 [2024-12-06 07:09:55.802149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.231 [2024-12-06 07:09:55.802169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:23.231 [2024-12-06 07:09:55.802182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:43:23.231 [2024-12-06 07:09:55.802190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.231 [2024-12-06 07:09:55.802269] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.231 [2024-12-06 07:09:55.802282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:23.231 [2024-12-06 07:09:55.802294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:43:23.231 [2024-12-06 07:09:55.802304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.231 [2024-12-06 07:09:55.802408] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:23.231 [2024-12-06 07:09:55.802426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:23.231 [2024-12-06 07:09:55.802439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:23.231 [2024-12-06 07:09:55.802467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:23.231 [2024-12-06 07:09:55.802499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:23.231 [2024-12-06 07:09:55.802517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:23.231 [2024-12-06 07:09:55.802526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:43:23.231 [2024-12-06 07:09:55.802536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:23.231 [2024-12-06 07:09:55.802545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:23.231 [2024-12-06 07:09:55.802555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:43:23.231 [2024-12-06 07:09:55.802563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:23.231 [2024-12-06 07:09:55.802584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:23.231 [2024-12-06 07:09:55.802613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:23.231 [2024-12-06 07:09:55.802640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:23.231 [2024-12-06 07:09:55.802668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802687] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:23.231 [2024-12-06 07:09:55.802696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:23.231 [2024-12-06 07:09:55.802784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:23.231 [2024-12-06 07:09:55.802805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:23.231 [2024-12-06 07:09:55.802815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:43:23.231 [2024-12-06 07:09:55.802826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:23.231 [2024-12-06 07:09:55.802834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:23.231 [2024-12-06 07:09:55.802845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:43:23.231 [2024-12-06 07:09:55.802854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:23.231 [2024-12-06 07:09:55.802877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:43:23.231 [2024-12-06 07:09:55.802888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802897] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:23.231 [2024-12-06 07:09:55.802909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:23.231 [2024-12-06 07:09:55.802918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:23.231 [2024-12-06 07:09:55.802929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:23.231 [2024-12-06 07:09:55.802939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:23.231 [2024-12-06 07:09:55.802952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:23.231 [2024-12-06 07:09:55.802961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:23.231 [2024-12-06 07:09:55.802973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:23.231 [2024-12-06 07:09:55.802982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:23.231 [2024-12-06 07:09:55.802992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:23.231 [2024-12-06 07:09:55.803003] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:23.231 [2024-12-06 07:09:55.803020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:23.231 [2024-12-06 07:09:55.803031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:43:23.231 [2024-12-06 07:09:55.803058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:43:23.231 [2024-12-06 07:09:55.803083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:43:23.231 [2024-12-06 07:09:55.803095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:43:23.231 [2024-12-06 07:09:55.803105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:43:23.231 [2024-12-06 07:09:55.803119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:43:23.231 [2024-12-06 07:09:55.803129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:43:23.231 [2024-12-06 07:09:55.803155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:43:23.231 [2024-12-06 07:09:55.803165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:43:23.231 [2024-12-06 07:09:55.803193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:43:23.232 [2024-12-06 07:09:55.803204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:43:23.232 [2024-12-06 07:09:55.803215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:43:23.232 [2024-12-06 07:09:55.803225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:43:23.232 [2024-12-06 07:09:55.803237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:43:23.232 [2024-12-06 07:09:55.803247] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:23.232 [2024-12-06 07:09:55.803261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:23.232 [2024-12-06 07:09:55.803272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:23.232 [2024-12-06 07:09:55.803284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:23.232 [2024-12-06 07:09:55.803295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:23.232 [2024-12-06 07:09:55.803307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:23.232 [2024-12-06 07:09:55.803318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:23.232 [2024-12-06 07:09:55.803331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:23.232 [2024-12-06 07:09:55.803342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:43:23.232 [2024-12-06 07:09:55.803353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:23.232 [2024-12-06 07:09:55.803402] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:43:23.232 [2024-12-06 07:09:55.803422] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:43:25.766 [2024-12-06 07:09:58.093855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.766 [2024-12-06 07:09:58.093921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:43:25.766 [2024-12-06 07:09:58.093950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2290.467 ms 00:43:25.766 [2024-12-06 07:09:58.093970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.766 [2024-12-06 07:09:58.120683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.766 [2024-12-06 07:09:58.120766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:25.766 [2024-12-06 07:09:58.120807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.426 ms 00:43:25.766 [2024-12-06 07:09:58.120829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.766 [2024-12-06 07:09:58.121047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.766 [2024-12-06 07:09:58.121079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:25.766 [2024-12-06 07:09:58.121101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:43:25.766 [2024-12-06 07:09:58.121131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.766 [2024-12-06 07:09:58.153861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.766 [2024-12-06 07:09:58.153915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:25.766 [2024-12-06 07:09:58.153941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.656 ms 00:43:25.766 [2024-12-06 07:09:58.153963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.766 [2024-12-06 07:09:58.154023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.154056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:25.767 [2024-12-06 07:09:58.154077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:25.767 [2024-12-06 07:09:58.154124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.154555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.154629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:25.767 [2024-12-06 07:09:58.154672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:43:25.767 [2024-12-06 07:09:58.154697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.154928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.154963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:25.767 [2024-12-06 07:09:58.154989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:43:25.767 [2024-12-06 07:09:58.155014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.169878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.169925] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:25.767 [2024-12-06 07:09:58.169950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.824 ms 00:43:25.767 [2024-12-06 07:09:58.169973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.194242] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:43:25.767 [2024-12-06 07:09:58.196841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.196880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:25.767 [2024-12-06 07:09:58.196911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.742 ms 00:43:25.767 [2024-12-06 07:09:58.196931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.257350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.257418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:43:25.767 [2024-12-06 07:09:58.257450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.368 ms 00:43:25.767 [2024-12-06 07:09:58.257468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.257760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.257796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:25.767 [2024-12-06 07:09:58.257824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:43:25.767 [2024-12-06 07:09:58.257843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.282697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.282758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:43:25.767 [2024-12-06 07:09:58.282789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.766 ms 00:43:25.767 [2024-12-06 07:09:58.282811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.306962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.307000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:43:25.767 [2024-12-06 07:09:58.307028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.086 ms 00:43:25.767 [2024-12-06 07:09:58.307047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:25.767 [2024-12-06 07:09:58.307795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:25.767 [2024-12-06 07:09:58.307834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:25.767 [2024-12-06 07:09:58.307861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:43:25.767 [2024-12-06 07:09:58.307886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:26.026 [2024-12-06 07:09:58.379693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:26.026 [2024-12-06 07:09:58.379752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:43:26.026 [2024-12-06 07:09:58.379785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.744 ms 00:43:26.026 [2024-12-06 07:09:58.379804] 
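The L2P numbers in this startup trace are self-consistent: the layout reports 20971520 L2P entries with an address size of 4 bytes, i.e. an 80 MiB table (matching "Region l2p ... blocks: 80.00 MiB" above), mapping 20971520 user-visible 4 KiB blocks, or 80 GiB of logical space. Because the device was created with --l2p_dram_limit 10, only part of that table may stay resident in DRAM, hence "l2p maximum resident size is: 9 (of 10) MiB". The arithmetic, as a quick shell check:

    echo $(( 20971520 * 4 / 1024 / 1024 ))            # 80  -> full L2P table size, MiB
    echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))  # 80  -> mapped user capacity, GiB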
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:26.026 [2024-12-06 07:09:58.404981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:26.026 [2024-12-06 07:09:58.405021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:43:26.026 [2024-12-06 07:09:58.405050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.063 ms 00:43:26.026 [2024-12-06 07:09:58.405071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:26.026 [2024-12-06 07:09:58.429608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:26.026 [2024-12-06 07:09:58.429647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:43:26.026 [2024-12-06 07:09:58.429674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.477 ms 00:43:26.026 [2024-12-06 07:09:58.429694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:26.026 [2024-12-06 07:09:58.454507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:26.026 [2024-12-06 07:09:58.454546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:26.026 [2024-12-06 07:09:58.454575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.734 ms 00:43:26.026 [2024-12-06 07:09:58.454594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:26.026 [2024-12-06 07:09:58.454663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:26.026 [2024-12-06 07:09:58.454687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:26.026 [2024-12-06 07:09:58.454730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:26.026 [2024-12-06 07:09:58.454753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:26.026 [2024-12-06 07:09:58.454903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:26.027 [2024-12-06 07:09:58.454933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:26.027 [2024-12-06 07:09:58.454957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:43:26.027 [2024-12-06 07:09:58.454974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:26.027 [2024-12-06 07:09:58.456430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2668.336 ms, result 0 00:43:26.027 { 00:43:26.027 "name": "ftl0", 00:43:26.027 "uuid": "3095e218-c1bf-4965-bf6a-aaea9793f2eb" 00:43:26.027 } 00:43:26.027 07:09:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:43:26.027 07:09:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:43:26.285 07:09:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:43:26.285 07:09:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:43:26.285 07:09:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:43:26.545 /dev/nbd0 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:43:26.545 1+0 records in 00:43:26.545 1+0 records out 00:43:26.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533274 s, 7.7 MB/s 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:43:26.545 07:09:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:43:26.805 [2024-12-06 07:09:59.158908] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:43:26.805 [2024-12-06 07:09:59.159294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81041 ] 00:43:26.805 [2024-12-06 07:09:59.336604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:27.065 [2024-12-06 07:09:59.479951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:28.446  [2024-12-06T07:10:02.039Z] Copying: 205/1024 [MB] (205 MBps) [2024-12-06T07:10:02.974Z] Copying: 410/1024 [MB] (205 MBps) [2024-12-06T07:10:03.911Z] Copying: 618/1024 [MB] (207 MBps) [2024-12-06T07:10:04.849Z] Copying: 814/1024 [MB] (196 MBps) [2024-12-06T07:10:05.108Z] Copying: 1001/1024 [MB] (186 MBps) [2024-12-06T07:10:06.046Z] Copying: 1024/1024 [MB] (average 200 MBps) 00:43:33.455 00:43:33.455 07:10:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:43:35.361 07:10:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:43:35.361 [2024-12-06 07:10:07.611988] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
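With ftl0 exported as /dev/nbd0 (waitfornbd polls /proc/partitions and then confirms the device is readable with a single 4 KiB dd, as traced above), and the bdev subsystem config snapshotted via save_subsystem_config (presumably for reuse later in the dirty-shutdown flow), the test pushes exactly 1 GiB through the device: 262144 blocks x 4096 B = 1024 MiB, which is the ceiling the progress counters below climb toward. The write-and-checksum phase, condensed, with paths taken from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    dd_app=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    modprobe nbd
    "$rpc" nbd_start_disk ftl0 /dev/nbd0
    "$dd_app" -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144   # ~200 MBps above
    md5sum "$testfile"                                  # reference checksum for later comparison
    "$dd_app" -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct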
00:43:35.362 [2024-12-06 07:10:07.612410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81129 ] 00:43:35.362 [2024-12-06 07:10:07.782577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:35.362 [2024-12-06 07:10:07.901170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:36.739  [2024-12-06T07:10:10.267Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-06T07:10:11.201Z] Copying: 30/1024 [MB] (14 MBps) [2024-12-06T07:10:12.575Z] Copying: 44/1024 [MB] (13 MBps) [2024-12-06T07:10:13.142Z] Copying: 57/1024 [MB] (12 MBps) [2024-12-06T07:10:14.521Z] Copying: 69/1024 [MB] (12 MBps) [2024-12-06T07:10:15.459Z] Copying: 83/1024 [MB] (13 MBps) [2024-12-06T07:10:16.393Z] Copying: 98/1024 [MB] (15 MBps) [2024-12-06T07:10:17.329Z] Copying: 113/1024 [MB] (15 MBps) [2024-12-06T07:10:18.267Z] Copying: 129/1024 [MB] (15 MBps) [2024-12-06T07:10:19.203Z] Copying: 144/1024 [MB] (15 MBps) [2024-12-06T07:10:20.582Z] Copying: 159/1024 [MB] (14 MBps) [2024-12-06T07:10:21.151Z] Copying: 174/1024 [MB] (15 MBps) [2024-12-06T07:10:22.530Z] Copying: 189/1024 [MB] (15 MBps) [2024-12-06T07:10:23.467Z] Copying: 204/1024 [MB] (15 MBps) [2024-12-06T07:10:24.405Z] Copying: 220/1024 [MB] (15 MBps) [2024-12-06T07:10:25.342Z] Copying: 235/1024 [MB] (15 MBps) [2024-12-06T07:10:26.279Z] Copying: 251/1024 [MB] (15 MBps) [2024-12-06T07:10:27.219Z] Copying: 266/1024 [MB] (15 MBps) [2024-12-06T07:10:28.157Z] Copying: 281/1024 [MB] (15 MBps) [2024-12-06T07:10:29.537Z] Copying: 296/1024 [MB] (15 MBps) [2024-12-06T07:10:30.476Z] Copying: 312/1024 [MB] (15 MBps) [2024-12-06T07:10:31.414Z] Copying: 327/1024 [MB] (15 MBps) [2024-12-06T07:10:32.390Z] Copying: 343/1024 [MB] (15 MBps) [2024-12-06T07:10:33.340Z] Copying: 358/1024 [MB] (15 MBps) [2024-12-06T07:10:34.293Z] Copying: 374/1024 [MB] (15 MBps) [2024-12-06T07:10:35.230Z] Copying: 389/1024 [MB] (15 MBps) [2024-12-06T07:10:36.166Z] Copying: 405/1024 [MB] (15 MBps) [2024-12-06T07:10:37.542Z] Copying: 420/1024 [MB] (15 MBps) [2024-12-06T07:10:38.474Z] Copying: 435/1024 [MB] (15 MBps) [2024-12-06T07:10:39.411Z] Copying: 451/1024 [MB] (15 MBps) [2024-12-06T07:10:40.344Z] Copying: 466/1024 [MB] (15 MBps) [2024-12-06T07:10:41.279Z] Copying: 481/1024 [MB] (15 MBps) [2024-12-06T07:10:42.212Z] Copying: 497/1024 [MB] (15 MBps) [2024-12-06T07:10:43.148Z] Copying: 512/1024 [MB] (15 MBps) [2024-12-06T07:10:44.528Z] Copying: 527/1024 [MB] (15 MBps) [2024-12-06T07:10:45.465Z] Copying: 543/1024 [MB] (15 MBps) [2024-12-06T07:10:46.401Z] Copying: 558/1024 [MB] (15 MBps) [2024-12-06T07:10:47.336Z] Copying: 573/1024 [MB] (14 MBps) [2024-12-06T07:10:48.273Z] Copying: 588/1024 [MB] (15 MBps) [2024-12-06T07:10:49.211Z] Copying: 603/1024 [MB] (14 MBps) [2024-12-06T07:10:50.149Z] Copying: 618/1024 [MB] (14 MBps) [2024-12-06T07:10:51.529Z] Copying: 633/1024 [MB] (15 MBps) [2024-12-06T07:10:52.467Z] Copying: 648/1024 [MB] (15 MBps) [2024-12-06T07:10:53.404Z] Copying: 663/1024 [MB] (15 MBps) [2024-12-06T07:10:54.343Z] Copying: 679/1024 [MB] (15 MBps) [2024-12-06T07:10:55.289Z] Copying: 694/1024 [MB] (15 MBps) [2024-12-06T07:10:56.224Z] Copying: 709/1024 [MB] (15 MBps) [2024-12-06T07:10:57.158Z] Copying: 724/1024 [MB] (15 MBps) [2024-12-06T07:10:58.534Z] Copying: 740/1024 [MB] (15 MBps) [2024-12-06T07:10:59.470Z] Copying: 755/1024 [MB] (15 MBps) [2024-12-06T07:11:00.406Z] 
Copying: 770/1024 [MB] (15 MBps) [2024-12-06T07:11:01.341Z] Copying: 786/1024 [MB] (15 MBps) [2024-12-06T07:11:02.276Z] Copying: 801/1024 [MB] (15 MBps) [2024-12-06T07:11:03.212Z] Copying: 816/1024 [MB] (15 MBps) [2024-12-06T07:11:04.158Z] Copying: 832/1024 [MB] (15 MBps) [2024-12-06T07:11:05.563Z] Copying: 847/1024 [MB] (15 MBps) [2024-12-06T07:11:06.496Z] Copying: 862/1024 [MB] (15 MBps) [2024-12-06T07:11:07.431Z] Copying: 877/1024 [MB] (15 MBps) [2024-12-06T07:11:08.369Z] Copying: 893/1024 [MB] (15 MBps) [2024-12-06T07:11:09.308Z] Copying: 908/1024 [MB] (15 MBps) [2024-12-06T07:11:10.245Z] Copying: 924/1024 [MB] (15 MBps) [2024-12-06T07:11:11.185Z] Copying: 939/1024 [MB] (15 MBps) [2024-12-06T07:11:12.565Z] Copying: 954/1024 [MB] (15 MBps) [2024-12-06T07:11:13.502Z] Copying: 969/1024 [MB] (15 MBps) [2024-12-06T07:11:14.440Z] Copying: 985/1024 [MB] (15 MBps) [2024-12-06T07:11:15.375Z] Copying: 999/1024 [MB] (14 MBps) [2024-12-06T07:11:15.941Z] Copying: 1014/1024 [MB] (14 MBps) [2024-12-06T07:11:16.877Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:44:44.286 00:44:44.286 07:11:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:44:44.286 07:11:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:44:44.286 07:11:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:44:44.545 [2024-12-06 07:11:17.027180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.545 [2024-12-06 07:11:17.027250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:44.545 [2024-12-06 07:11:17.027269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:44.545 [2024-12-06 07:11:17.027281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.545 [2024-12-06 07:11:17.027322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:44.545 [2024-12-06 07:11:17.030262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.545 [2024-12-06 07:11:17.030293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:44.545 [2024-12-06 07:11:17.030325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:44:44.545 [2024-12-06 07:11:17.030334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.545 [2024-12-06 07:11:17.032193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.545 [2024-12-06 07:11:17.032230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:44.545 [2024-12-06 07:11:17.032264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.822 ms 00:44:44.545 [2024-12-06 07:11:17.032274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.545 [2024-12-06 07:11:17.047533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.545 [2024-12-06 07:11:17.047572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:44.545 [2024-12-06 07:11:17.047607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.230 ms 00:44:44.545 [2024-12-06 07:11:17.047618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.546 [2024-12-06 07:11:17.053186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.546 [2024-12-06 07:11:17.053218] 
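Teardown mirrors setup: flush the exported device, detach it, then unload the FTL bdev, which runs the "FTL shutdown" persistence pipeline traced below (L2P, NV cache metadata, valid map, P2L, band info, trim metadata, superblock, and finally the clean-state flag). Condensed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sync /dev/nbd0                  # flush the page cache for the NBD device
    "$rpc" nbd_stop_disk /dev/nbd0
    "$rpc" bdev_ftl_unload -b ftl0  # graceful unload; persists metadata and marks FTL clean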
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:44.546 [2024-12-06 07:11:17.053249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.524 ms 00:44:44.546 [2024-12-06 07:11:17.053259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.546 [2024-12-06 07:11:17.078758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.546 [2024-12-06 07:11:17.078795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:44.546 [2024-12-06 07:11:17.078829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.420 ms 00:44:44.546 [2024-12-06 07:11:17.078839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.546 [2024-12-06 07:11:17.095248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.546 [2024-12-06 07:11:17.095285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:44.546 [2024-12-06 07:11:17.095322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.362 ms 00:44:44.546 [2024-12-06 07:11:17.095332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.546 [2024-12-06 07:11:17.095487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.546 [2024-12-06 07:11:17.095505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:44.546 [2024-12-06 07:11:17.095519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:44:44.546 [2024-12-06 07:11:17.095529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.546 [2024-12-06 07:11:17.121291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.546 [2024-12-06 07:11:17.121500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:44.546 [2024-12-06 07:11:17.121532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.738 ms 00:44:44.546 [2024-12-06 07:11:17.121544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.806 [2024-12-06 07:11:17.148356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.806 [2024-12-06 07:11:17.148597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:44.806 [2024-12-06 07:11:17.148630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.764 ms 00:44:44.806 [2024-12-06 07:11:17.148643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.806 [2024-12-06 07:11:17.173788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.806 [2024-12-06 07:11:17.173824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:44.806 [2024-12-06 07:11:17.173858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.092 ms 00:44:44.806 [2024-12-06 07:11:17.173868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.806 [2024-12-06 07:11:17.198308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.806 [2024-12-06 07:11:17.198344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:44.806 [2024-12-06 07:11:17.198360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.349 ms 00:44:44.806 [2024-12-06 07:11:17.198369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.806 [2024-12-06 07:11:17.198410] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:44:44.806 [2024-12-06 07:11:17.198429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-99: 0 / 261120
wr_cnt: 0 state: free
00:44:44.807 [2024-12-06 07:11:17.199716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:44:44.807 [2024-12-06 07:11:17.199738] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:44:44.807 [2024-12-06 07:11:17.199761] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3095e218-c1bf-4965-bf6a-aaea9793f2eb
00:44:44.807 [2024-12-06 07:11:17.199800] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:44:44.807 [2024-12-06 07:11:17.199821] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:44:44.807 [2024-12-06 07:11:17.199834] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:44:44.807 [2024-12-06 07:11:17.199845] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:44:44.807 [2024-12-06 07:11:17.199855] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:44:44.807 [2024-12-06 07:11:17.199866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:44:44.807 [2024-12-06 07:11:17.199876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:44:44.807 [2024-12-06 07:11:17.199886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:44:44.807 [2024-12-06 07:11:17.199895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:44:44.807 [2024-12-06 07:11:17.199907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:44.807 [2024-12-06 07:11:17.199917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:44:44.807 [2024-12-06 07:11:17.199929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms
00:44:44.807 [2024-12-06 07:11:17.199939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:44.807 [2024-12-06 07:11:17.213518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:44.807 [2024-12-06 07:11:17.213553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:44:44.807 [2024-12-06 07:11:17.213569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.524 ms
00:44:44.807 [2024-12-06 07:11:17.213579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:44.807 [2024-12-06 07:11:17.213999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:44.807 [2024-12-06 07:11:17.214017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:44:44.807 [2024-12-06 07:11:17.214035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms
00:44:44.807 [2024-12-06 07:11:17.214085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:44.807 [2024-12-06 07:11:17.256375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:44.807 [2024-12-06 07:11:17.256438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:44:44.807 [2024-12-06 07:11:17.256474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:44.807 [2024-12-06 07:11:17.256484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:44.807 [2024-12-06 07:11:17.256545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:44.807 [2024-12-06 07:11:17.256559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:44:44.807 [2024-12-06 07:11:17.256572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:44.807 [2024-12-06 07:11:17.256581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:44.807 [2024-12-06 07:11:17.256691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:44.807 [2024-12-06 07:11:17.256711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:44:44.808 [2024-12-06 07:11:17.256757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:44.808 [2024-12-06 07:11:17.256770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:44.808 [2024-12-06 07:11:17.256799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:44.808 [2024-12-06 07:11:17.256811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:44:44.808 [2024-12-06 07:11:17.256823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:44.808 [2024-12-06 07:11:17.256832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:44.808 [2024-12-06 07:11:17.335499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:44.808 [2024-12-06 07:11:17.335554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:44:44.808 [2024-12-06 07:11:17.335573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:44.808 [2024-12-06 07:11:17.335583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:45.067 [2024-12-06 07:11:17.401200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:45.067 [2024-12-06 07:11:17.401251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:44:45.067 [2024-12-06 07:11:17.401270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:45.067 [2024-12-06 07:11:17.401280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:45.067 [2024-12-06 07:11:17.401380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:45.067 [2024-12-06 07:11:17.401397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:44:45.067 [2024-12-06 07:11:17.401412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:45.067 [2024-12-06 07:11:17.401421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:45.067 [2024-12-06 07:11:17.401495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:45.067 [2024-12-06 07:11:17.401511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:44:45.067 [2024-12-06 07:11:17.401523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:45.067 [2024-12-06 07:11:17.401532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:45.067 [2024-12-06 07:11:17.401640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:45.067 [2024-12-06 07:11:17.401657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:44:45.067 [2024-12-06 07:11:17.401669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:45.067 [2024-12-06 07:11:17.401680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:45.067 [2024-12-06 07:11:17.401773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:45.067 [2024-12-06 07:11:17.401791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
[2024-12-06 07:11:17.401804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.067 [2024-12-06 07:11:17.401814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.067 [2024-12-06 07:11:17.401875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.067 [2024-12-06 07:11:17.401889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:45.067 [2024-12-06 07:11:17.401902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.067 [2024-12-06 07:11:17.401913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.067 [2024-12-06 07:11:17.401966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.067 [2024-12-06 07:11:17.401982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:45.067 [2024-12-06 07:11:17.401994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.067 [2024-12-06 07:11:17.402003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.067 [2024-12-06 07:11:17.402204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.940 ms, result 0 00:44:45.067 true 00:44:45.067 07:11:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80904 00:44:45.067 07:11:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80904 00:44:45.067 07:11:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:44:45.067 [2024-12-06 07:11:17.511519] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:44:45.067 [2024-12-06 07:11:17.511635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81816 ] 00:44:45.326 [2024-12-06 07:11:17.674447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:45.326 [2024-12-06 07:11:17.753736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:46.704  [2024-12-06T07:11:20.233Z] Copying: 215/1024 [MB] (215 MBps) [2024-12-06T07:11:21.170Z] Copying: 424/1024 [MB] (209 MBps) [2024-12-06T07:11:22.108Z] Copying: 631/1024 [MB] (206 MBps) [2024-12-06T07:11:23.045Z] Copying: 843/1024 [MB] (212 MBps) [2024-12-06T07:11:23.983Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:44:51.392 00:44:51.392 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80904 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:44:51.392 07:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:51.392 [2024-12-06 07:11:23.735090] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
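
The lines above are the heart of the dirty-shutdown scenario: dirty_shutdown.sh SIGKILLs spdk_tgt mid-run (the "80904 Killed" notice), stages 1 GiB of random data with spdk_dd (262144 blocks of 4096 bytes), then replays that file into the ftl0 bdev so recovery can be exercised. A minimal Python sketch, using only figures copied from this log, to sanity-check the reported arithmetic; waf() is a hypothetical helper mirroring the total-writes over user-writes ratio that ftl_dev_dump_stats appears to print:

    # Hypothetical sanity check, not part of the test suite: recompute the
    # transfer size and the WAF figures reported in the surrounding dumps.
    block_size, block_count = 4096, 262144           # spdk_dd --bs=4096 --count=262144
    print(block_size * block_count // 2**20, "MiB")  # -> 1024, matches "Copying: 1024/1024 [MB]"

    def waf(total_writes, user_writes):
        # Write amplification factor as the ftl0 stats dump appears to
        # compute it: total device writes divided by user-initiated writes.
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))                    # shutdown dump above: "WAF: inf"
    print(round(waf(129728, 128768), 4))  # post-replay dump below: "WAF: 1.0075"

Note that the 960-write gap between the later total and user counters equals the 960 total writes in the dump above, consistent with those being FTL's own metadata writes rather than user data.
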
00:44:51.392 [2024-12-06 07:11:23.735210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81888 ] 00:44:51.392 [2024-12-06 07:11:23.894122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:51.392 [2024-12-06 07:11:23.972541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:51.651 [2024-12-06 07:11:24.229520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:51.651 [2024-12-06 07:11:24.229595] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:51.911 [2024-12-06 07:11:24.294976] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:44:51.911 [2024-12-06 07:11:24.295452] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:44:51.911 [2024-12-06 07:11:24.295610] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:44:52.171 [2024-12-06 07:11:24.565403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.171 [2024-12-06 07:11:24.565447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:52.171 [2024-12-06 07:11:24.565465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:52.171 [2024-12-06 07:11:24.565479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.171 [2024-12-06 07:11:24.565532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.171 [2024-12-06 07:11:24.565548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:52.171 [2024-12-06 07:11:24.565558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:44:52.171 [2024-12-06 07:11:24.565567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.171 [2024-12-06 07:11:24.565592] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:52.171 [2024-12-06 07:11:24.566370] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:52.171 [2024-12-06 07:11:24.566393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.171 [2024-12-06 07:11:24.566404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:52.171 [2024-12-06 07:11:24.566414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:44:52.171 [2024-12-06 07:11:24.566433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.171 [2024-12-06 07:11:24.567485] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:52.171 [2024-12-06 07:11:24.581452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.171 [2024-12-06 07:11:24.581489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:52.171 [2024-12-06 07:11:24.581521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.968 ms 00:44:52.171 [2024-12-06 07:11:24.581531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.171 [2024-12-06 07:11:24.581592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.171 [2024-12-06 07:11:24.581608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:44:52.171 [2024-12-06 07:11:24.581619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:44:52.171 [2024-12-06 07:11:24.581628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.171 [2024-12-06 07:11:24.585813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.171 [2024-12-06 07:11:24.585851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:52.172 [2024-12-06 07:11:24.585881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.073 ms 00:44:52.172 [2024-12-06 07:11:24.585890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.172 [2024-12-06 07:11:24.585972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.172 [2024-12-06 07:11:24.585989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:52.172 [2024-12-06 07:11:24.586000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:44:52.172 [2024-12-06 07:11:24.586009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.172 [2024-12-06 07:11:24.586060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.172 [2024-12-06 07:11:24.586076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:52.172 [2024-12-06 07:11:24.586101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:44:52.172 [2024-12-06 07:11:24.586111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.172 [2024-12-06 07:11:24.586140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:52.172 [2024-12-06 07:11:24.590064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.172 [2024-12-06 07:11:24.590113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:52.172 [2024-12-06 07:11:24.590144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.932 ms 00:44:52.172 [2024-12-06 07:11:24.590154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.172 [2024-12-06 07:11:24.590191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.172 [2024-12-06 07:11:24.590205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:52.172 [2024-12-06 07:11:24.590216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:52.172 [2024-12-06 07:11:24.590226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.172 [2024-12-06 07:11:24.590284] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:52.172 [2024-12-06 07:11:24.590315] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:52.172 [2024-12-06 07:11:24.590355] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:52.172 [2024-12-06 07:11:24.590372] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:52.172 [2024-12-06 07:11:24.590484] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:52.172 [2024-12-06 07:11:24.590508] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:52.172 
[2024-12-06 07:11:24.590530] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:52.172 [2024-12-06 07:11:24.590553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:52.172 [2024-12-06 07:11:24.590565] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:52.172 [2024-12-06 07:11:24.590575] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:52.172 [2024-12-06 07:11:24.590585] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:52.172 [2024-12-06 07:11:24.590595] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:52.172 [2024-12-06 07:11:24.590604] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:52.172 [2024-12-06 07:11:24.590615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.172 [2024-12-06 07:11:24.590625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:52.172 [2024-12-06 07:11:24.590636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:44:52.172 [2024-12-06 07:11:24.590657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.172 [2024-12-06 07:11:24.590769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.172 [2024-12-06 07:11:24.590793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:52.172 [2024-12-06 07:11:24.590804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:44:52.172 [2024-12-06 07:11:24.590814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.172 [2024-12-06 07:11:24.590925] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:52.172 [2024-12-06 07:11:24.590944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:52.172 [2024-12-06 07:11:24.590964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:52.172 [2024-12-06 07:11:24.590982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.172 [2024-12-06 07:11:24.590994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:52.172 [2024-12-06 07:11:24.591003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:52.172 [2024-12-06 07:11:24.591022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:52.172 [2024-12-06 07:11:24.591032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:52.172 [2024-12-06 07:11:24.591064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:52.172 [2024-12-06 07:11:24.591073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:52.172 [2024-12-06 07:11:24.591082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:52.172 [2024-12-06 07:11:24.591107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:52.172 [2024-12-06 07:11:24.591133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:52.172 [2024-12-06 07:11:24.591142] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:52.172 [2024-12-06 07:11:24.591160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:52.172 [2024-12-06 07:11:24.591169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:52.172 [2024-12-06 07:11:24.591201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:52.172 [2024-12-06 07:11:24.591223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:52.172 [2024-12-06 07:11:24.591232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:52.172 [2024-12-06 07:11:24.591250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:52.172 [2024-12-06 07:11:24.591259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:52.172 [2024-12-06 07:11:24.591277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:52.172 [2024-12-06 07:11:24.591287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:52.172 [2024-12-06 07:11:24.591305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:52.172 [2024-12-06 07:11:24.591314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:52.172 [2024-12-06 07:11:24.591332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:52.172 [2024-12-06 07:11:24.591341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:52.172 [2024-12-06 07:11:24.591350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:52.172 [2024-12-06 07:11:24.591359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:52.172 [2024-12-06 07:11:24.591371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:52.172 [2024-12-06 07:11:24.591387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:52.172 [2024-12-06 07:11:24.591415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:52.172 [2024-12-06 07:11:24.591424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.172 [2024-12-06 07:11:24.591449] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:52.172 [2024-12-06 07:11:24.591459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:52.172 [2024-12-06 07:11:24.591474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:52.172 [2024-12-06 07:11:24.591484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:52.172 [2024-12-06 
07:11:24.591495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:52.172 [2024-12-06 07:11:24.591504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:52.172 [2024-12-06 07:11:24.591513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:52.172 [2024-12-06 07:11:24.591522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:52.172 [2024-12-06 07:11:24.591531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:52.172 [2024-12-06 07:11:24.591540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:52.172 [2024-12-06 07:11:24.591551] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:52.172 [2024-12-06 07:11:24.591563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:52.172 [2024-12-06 07:11:24.591573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:52.172 [2024-12-06 07:11:24.591583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:52.172 [2024-12-06 07:11:24.591595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:52.172 [2024-12-06 07:11:24.591611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:52.172 [2024-12-06 07:11:24.591629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:52.172 [2024-12-06 07:11:24.591644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:52.173 [2024-12-06 07:11:24.591654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:52.173 [2024-12-06 07:11:24.591663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:52.173 [2024-12-06 07:11:24.591673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:52.173 [2024-12-06 07:11:24.591682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:52.173 [2024-12-06 07:11:24.591692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:52.173 [2024-12-06 07:11:24.591701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:52.173 [2024-12-06 07:11:24.591711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:52.173 [2024-12-06 07:11:24.591721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:52.173 [2024-12-06 07:11:24.591746] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:44:52.173 [2024-12-06 07:11:24.591757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:52.173 [2024-12-06 07:11:24.591784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:52.173 [2024-12-06 07:11:24.591796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:52.173 [2024-12-06 07:11:24.591806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:52.173 [2024-12-06 07:11:24.591816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:52.173 [2024-12-06 07:11:24.591827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.591840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:52.173 [2024-12-06 07:11:24.591857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:44:52.173 [2024-12-06 07:11:24.591879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.173 [2024-12-06 07:11:24.622581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.622660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:52.173 [2024-12-06 07:11:24.622697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.611 ms 00:44:52.173 [2024-12-06 07:11:24.622710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.173 [2024-12-06 07:11:24.622865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.622883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:52.173 [2024-12-06 07:11:24.622896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:44:52.173 [2024-12-06 07:11:24.622907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.173 [2024-12-06 07:11:24.696581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.696836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:52.173 [2024-12-06 07:11:24.696884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.574 ms 00:44:52.173 [2024-12-06 07:11:24.696900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.173 [2024-12-06 07:11:24.696986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.697007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:52.173 [2024-12-06 07:11:24.697023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:52.173 [2024-12-06 07:11:24.697037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.173 [2024-12-06 07:11:24.697550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.697574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:52.173 [2024-12-06 07:11:24.697590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:44:52.173 [2024-12-06 07:11:24.697613] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.173 [2024-12-06 07:11:24.697868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.697894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:52.173 [2024-12-06 07:11:24.697909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:44:52.173 [2024-12-06 07:11:24.697923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.173 [2024-12-06 07:11:24.720007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.720335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:52.173 [2024-12-06 07:11:24.720373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.047 ms 00:44:52.173 [2024-12-06 07:11:24.720390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.173 [2024-12-06 07:11:24.742536] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:44:52.173 [2024-12-06 07:11:24.742932] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:52.173 [2024-12-06 07:11:24.743220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.173 [2024-12-06 07:11:24.743366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:52.173 [2024-12-06 07:11:24.743531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.584 ms 00:44:52.173 [2024-12-06 07:11:24.743736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.781831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.782075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:52.433 [2024-12-06 07:11:24.782219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.666 ms 00:44:52.433 [2024-12-06 07:11:24.782280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.802249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.802596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:52.433 [2024-12-06 07:11:24.802750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.724 ms 00:44:52.433 [2024-12-06 07:11:24.802893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.824707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.825119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:52.433 [2024-12-06 07:11:24.825262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.682 ms 00:44:52.433 [2024-12-06 07:11:24.825327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.826634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.826848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:52.433 [2024-12-06 07:11:24.826985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:44:52.433 [2024-12-06 07:11:24.827049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
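
Each startup and shutdown step in this run is bracketed by mngt/ftl_mngt.c trace_step records (an Action or Rollback marker, then name, duration and status lines), and finish_msg reports the overall total, e.g. 'FTL shutdown', duration = 374.940 ms earlier. A minimal, hypothetical scraping sketch for profiling which steps dominate, assuming this console output was saved one entry per line to a file ("console.log" is an assumed path, not produced by the test):

    # Hypothetical log scraper: pair each trace_step "name:" record with the
    # "duration:" record that follows it and report per-step timings.
    import re

    name_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
    dur_re  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    steps, current = [], None
    with open("console.log") as f:  # assumed location of the saved console log
        for line in f:
            if (m := name_re.search(line)):
                current = m.group(1).strip()
            elif (m := dur_re.search(line)) and current is not None:
                steps.append((current, float(m.group(1))))
                current = None

    for name, ms in sorted(steps, key=lambda s: -s[1]):
        print(f"{ms:10.3f} ms  {name}")

On this run the slow startup steps are the recovery-related ones, e.g. Initialize NV cache (73.574 ms above) and Restore P2L checkpoints (71.006 ms below), which is consistent with recovery after a kill -9: the write buffer and P2L checkpoints must be replayed rather than loaded clean.
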
00:44:52.433 [2024-12-06 07:11:24.898228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.898456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:52.433 [2024-12-06 07:11:24.898581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.006 ms 00:44:52.433 [2024-12-06 07:11:24.898628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.911826] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:52.433 [2024-12-06 07:11:24.914815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.914857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:52.433 [2024-12-06 07:11:24.914878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.062 ms 00:44:52.433 [2024-12-06 07:11:24.914899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.915032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.915054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:52.433 [2024-12-06 07:11:24.915070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:52.433 [2024-12-06 07:11:24.915082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.915197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.915218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:52.433 [2024-12-06 07:11:24.915231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:44:52.433 [2024-12-06 07:11:24.915243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.915283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.915300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:52.433 [2024-12-06 07:11:24.915312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:52.433 [2024-12-06 07:11:24.915324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.915368] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:52.433 [2024-12-06 07:11:24.915387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.915399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:52.433 [2024-12-06 07:11:24.915412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:44:52.433 [2024-12-06 07:11:24.915428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.945683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 07:11:24.946000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:52.433 [2024-12-06 07:11:24.946114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.226 ms 00:44:52.433 [2024-12-06 07:11:24.946162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.946290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:52.433 [2024-12-06 
07:11:24.946425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:52.433 [2024-12-06 07:11:24.946474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:44:52.433 [2024-12-06 07:11:24.946509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:52.433 [2024-12-06 07:11:24.947996] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 382.003 ms, result 0 00:44:53.369  [2024-12-06T07:11:27.337Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-06T07:11:28.275Z] Copying: 46/1024 [MB] (23 MBps) [2024-12-06T07:11:29.213Z] Copying: 69/1024 [MB] (23 MBps) [2024-12-06T07:11:30.151Z] Copying: 92/1024 [MB] (22 MBps) [2024-12-06T07:11:31.089Z] Copying: 115/1024 [MB] (23 MBps) [2024-12-06T07:11:32.024Z] Copying: 138/1024 [MB] (23 MBps) [2024-12-06T07:11:32.961Z] Copying: 161/1024 [MB] (22 MBps) [2024-12-06T07:11:34.338Z] Copying: 183/1024 [MB] (22 MBps) [2024-12-06T07:11:35.297Z] Copying: 206/1024 [MB] (22 MBps) [2024-12-06T07:11:36.230Z] Copying: 229/1024 [MB] (23 MBps) [2024-12-06T07:11:37.162Z] Copying: 252/1024 [MB] (22 MBps) [2024-12-06T07:11:38.098Z] Copying: 275/1024 [MB] (23 MBps) [2024-12-06T07:11:39.037Z] Copying: 299/1024 [MB] (23 MBps) [2024-12-06T07:11:39.975Z] Copying: 322/1024 [MB] (23 MBps) [2024-12-06T07:11:41.354Z] Copying: 345/1024 [MB] (22 MBps) [2024-12-06T07:11:42.292Z] Copying: 368/1024 [MB] (23 MBps) [2024-12-06T07:11:43.230Z] Copying: 391/1024 [MB] (23 MBps) [2024-12-06T07:11:44.167Z] Copying: 414/1024 [MB] (23 MBps) [2024-12-06T07:11:45.105Z] Copying: 437/1024 [MB] (22 MBps) [2024-12-06T07:11:46.041Z] Copying: 460/1024 [MB] (23 MBps) [2024-12-06T07:11:46.981Z] Copying: 484/1024 [MB] (23 MBps) [2024-12-06T07:11:48.361Z] Copying: 507/1024 [MB] (22 MBps) [2024-12-06T07:11:49.300Z] Copying: 530/1024 [MB] (23 MBps) [2024-12-06T07:11:50.244Z] Copying: 553/1024 [MB] (23 MBps) [2024-12-06T07:11:51.181Z] Copying: 576/1024 [MB] (23 MBps) [2024-12-06T07:11:52.118Z] Copying: 599/1024 [MB] (23 MBps) [2024-12-06T07:11:53.055Z] Copying: 622/1024 [MB] (23 MBps) [2024-12-06T07:11:53.990Z] Copying: 645/1024 [MB] (22 MBps) [2024-12-06T07:11:55.363Z] Copying: 668/1024 [MB] (22 MBps) [2024-12-06T07:11:56.297Z] Copying: 691/1024 [MB] (23 MBps) [2024-12-06T07:11:57.232Z] Copying: 714/1024 [MB] (23 MBps) [2024-12-06T07:11:58.168Z] Copying: 737/1024 [MB] (23 MBps) [2024-12-06T07:11:59.107Z] Copying: 760/1024 [MB] (23 MBps) [2024-12-06T07:12:00.040Z] Copying: 783/1024 [MB] (23 MBps) [2024-12-06T07:12:00.979Z] Copying: 807/1024 [MB] (23 MBps) [2024-12-06T07:12:02.358Z] Copying: 830/1024 [MB] (23 MBps) [2024-12-06T07:12:03.295Z] Copying: 854/1024 [MB] (23 MBps) [2024-12-06T07:12:04.232Z] Copying: 878/1024 [MB] (23 MBps) [2024-12-06T07:12:05.171Z] Copying: 902/1024 [MB] (24 MBps) [2024-12-06T07:12:06.117Z] Copying: 925/1024 [MB] (23 MBps) [2024-12-06T07:12:07.078Z] Copying: 948/1024 [MB] (23 MBps) [2024-12-06T07:12:08.015Z] Copying: 971/1024 [MB] (22 MBps) [2024-12-06T07:12:09.395Z] Copying: 994/1024 [MB] (23 MBps) [2024-12-06T07:12:10.331Z] Copying: 1017/1024 [MB] (22 MBps) [2024-12-06T07:12:10.331Z] Copying: 1048260/1048576 [kB] (6328 kBps) [2024-12-06T07:12:10.590Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 07:12:10.337483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.999 [2024-12-06 07:12:10.337806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:37.999 [2024-12-06 07:12:10.337850] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:37.999 [2024-12-06 07:12:10.337881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.999 [2024-12-06 07:12:10.340878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:37.999 [2024-12-06 07:12:10.345283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.999 [2024-12-06 07:12:10.345322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:37.999 [2024-12-06 07:12:10.345338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.132 ms 00:45:37.999 [2024-12-06 07:12:10.345358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.999 [2024-12-06 07:12:10.357153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.999 [2024-12-06 07:12:10.357194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:37.999 [2024-12-06 07:12:10.357211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.729 ms 00:45:37.999 [2024-12-06 07:12:10.357221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.999 [2024-12-06 07:12:10.377476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.999 [2024-12-06 07:12:10.377532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:37.999 [2024-12-06 07:12:10.377549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.236 ms 00:45:37.999 [2024-12-06 07:12:10.377560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.999 [2024-12-06 07:12:10.382949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.999 [2024-12-06 07:12:10.382983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:37.999 [2024-12-06 07:12:10.382996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.345 ms 00:45:37.999 [2024-12-06 07:12:10.383005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:37.999 [2024-12-06 07:12:10.407589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:37.999 [2024-12-06 07:12:10.407627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:37.999 [2024-12-06 07:12:10.407641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.539 ms 00:45:37.999 [2024-12-06 07:12:10.407652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.000 [2024-12-06 07:12:10.422421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.000 [2024-12-06 07:12:10.422459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:38.000 [2024-12-06 07:12:10.422474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.732 ms 00:45:38.000 [2024-12-06 07:12:10.422483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.000 [2024-12-06 07:12:10.547208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.000 [2024-12-06 07:12:10.547438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:38.000 [2024-12-06 07:12:10.547487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.682 ms 00:45:38.000 [2024-12-06 07:12:10.547505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.000 [2024-12-06 07:12:10.585962] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.000 [2024-12-06 07:12:10.586180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:38.000 [2024-12-06 07:12:10.586215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.422 ms 00:45:38.000 [2024-12-06 07:12:10.586253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.259 [2024-12-06 07:12:10.624502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.259 [2024-12-06 07:12:10.624558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:38.259 [2024-12-06 07:12:10.624581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.188 ms 00:45:38.259 [2024-12-06 07:12:10.624595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.259 [2024-12-06 07:12:10.652701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.259 [2024-12-06 07:12:10.652776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:38.259 [2024-12-06 07:12:10.652809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.051 ms 00:45:38.259 [2024-12-06 07:12:10.652819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.259 [2024-12-06 07:12:10.678054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.259 [2024-12-06 07:12:10.678091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:38.259 [2024-12-06 07:12:10.678121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.118 ms 00:45:38.259 [2024-12-06 07:12:10.678131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.259 [2024-12-06 07:12:10.678169] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:38.259 [2024-12-06 07:12:10.678190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:45:38.259 [2024-12-06 07:12:10.678202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 
261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:38.259 [2024-12-06 07:12:10.678484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678845] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.678993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 
07:12:10.679117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:38.260 [2024-12-06 07:12:10.679288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:38.261 [2024-12-06 07:12:10.679298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:38.261 [2024-12-06 07:12:10.679317] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:38.261 [2024-12-06 07:12:10.679328] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3095e218-c1bf-4965-bf6a-aaea9793f2eb 00:45:38.261 [2024-12-06 07:12:10.679355] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:45:38.261 [2024-12-06 07:12:10.679366] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129728 00:45:38.261 [2024-12-06 07:12:10.679376] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:45:38.261 [2024-12-06 07:12:10.679387] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:45:38.261 [2024-12-06 07:12:10.679398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:38.261 [2024-12-06 07:12:10.679410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:38.261 [2024-12-06 07:12:10.679421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:38.261 [2024-12-06 07:12:10.679431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:38.261 [2024-12-06 07:12:10.679440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:38.261 [2024-12-06 07:12:10.679451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.261 [2024-12-06 07:12:10.679461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:38.261 [2024-12-06 07:12:10.679472] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:45:38.261 [2024-12-06 07:12:10.679483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.261 [2024-12-06 07:12:10.693489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.261 [2024-12-06 07:12:10.693522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:38.261 [2024-12-06 07:12:10.693552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.983 ms 00:45:38.261 [2024-12-06 07:12:10.693562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.261 [2024-12-06 07:12:10.693994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:38.261 [2024-12-06 07:12:10.694035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:38.261 [2024-12-06 07:12:10.694071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:45:38.261 [2024-12-06 07:12:10.694114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.261 [2024-12-06 07:12:10.729334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.261 [2024-12-06 07:12:10.729374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:38.261 [2024-12-06 07:12:10.729404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.261 [2024-12-06 07:12:10.729414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.261 [2024-12-06 07:12:10.729476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.261 [2024-12-06 07:12:10.729490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:38.261 [2024-12-06 07:12:10.729503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.261 [2024-12-06 07:12:10.729512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.261 [2024-12-06 07:12:10.729583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.261 [2024-12-06 07:12:10.729601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:38.261 [2024-12-06 07:12:10.729612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.261 [2024-12-06 07:12:10.729621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.261 [2024-12-06 07:12:10.729640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.261 [2024-12-06 07:12:10.729652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:38.261 [2024-12-06 07:12:10.729662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.261 [2024-12-06 07:12:10.729672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.261 [2024-12-06 07:12:10.815670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.261 [2024-12-06 07:12:10.815781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:38.261 [2024-12-06 07:12:10.815806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.261 [2024-12-06 07:12:10.815817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.520 [2024-12-06 07:12:10.886020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.520 [2024-12-06 07:12:10.886068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 
00:45:38.520 [2024-12-06 07:12:10.886100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.520 [2024-12-06 07:12:10.886116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.520 [2024-12-06 07:12:10.886212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.520 [2024-12-06 07:12:10.886229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:38.520 [2024-12-06 07:12:10.886239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.520 [2024-12-06 07:12:10.886249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.520 [2024-12-06 07:12:10.886289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.520 [2024-12-06 07:12:10.886303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:38.520 [2024-12-06 07:12:10.886313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.520 [2024-12-06 07:12:10.886322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.520 [2024-12-06 07:12:10.886426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.520 [2024-12-06 07:12:10.886445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:38.520 [2024-12-06 07:12:10.886455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.520 [2024-12-06 07:12:10.886465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.520 [2024-12-06 07:12:10.886507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.520 [2024-12-06 07:12:10.886522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:38.520 [2024-12-06 07:12:10.886532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.520 [2024-12-06 07:12:10.886542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.521 [2024-12-06 07:12:10.886588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.521 [2024-12-06 07:12:10.886601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:38.521 [2024-12-06 07:12:10.886611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.521 [2024-12-06 07:12:10.886620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.521 [2024-12-06 07:12:10.886667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:38.521 [2024-12-06 07:12:10.886681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:38.521 [2024-12-06 07:12:10.886691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:38.521 [2024-12-06 07:12:10.886701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:38.521 [2024-12-06 07:12:10.886913] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.383 ms, result 0 00:45:39.896 00:45:39.896 00:45:39.896 07:12:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:45:41.272 07:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:41.531 
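
For orientation, the two test-script commands just above are the verification half of the dirty-shutdown scenario: md5sum records a file digest, and spdk_dd copies 262144 blocks out of the ftl0 bdev into testfile (at a presumed 4 KiB per block that is the 1024/1024 [MB] shown in the progress lines further down) so the data that survived the dirty shutdown can be checksummed. A minimal sketch of that checksum step, assuming only the path from the --of= argument; where the test script keeps the expected digest is not visible in this log:

    import hashlib
    import sys

    def md5_of(path: str, chunk: int = 1 << 20) -> str:
        """Stream a file through MD5, equivalent to `md5sum <path>`."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    if __name__ == "__main__":
        # Usage: python md5check.py <file> <expected-digest>
        # e.g. /home/vagrant/spdk_repo/spdk/test/ftl/testfile, the path from
        # the spdk_dd --of= argument above. The expected digest is a stand-in
        # for whatever the test recorded before the dirty shutdown.
        path, expected = sys.argv[1], sys.argv[2]
        actual = md5_of(path)
        sys.exit(0 if actual == expected else f"{path}: {actual} != {expected}")

If the FTL device had lost acknowledged writes across the dirty shutdown, the read-back digest would differ and the test would fail at this comparison.
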
[2024-12-06 07:12:13.950075] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:45:41.531 [2024-12-06 07:12:13.950271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82365 ] 00:45:41.790 [2024-12-06 07:12:14.136997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:41.790 [2024-12-06 07:12:14.261625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:42.048 [2024-12-06 07:12:14.528601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:42.048 [2024-12-06 07:12:14.528697] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:42.307 [2024-12-06 07:12:14.684387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.684442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:42.307 [2024-12-06 07:12:14.684475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:42.307 [2024-12-06 07:12:14.684486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.684545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.684564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:42.307 [2024-12-06 07:12:14.684576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:45:42.307 [2024-12-06 07:12:14.684585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.684612] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:42.307 [2024-12-06 07:12:14.685421] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:42.307 [2024-12-06 07:12:14.685450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.685462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:42.307 [2024-12-06 07:12:14.685473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:45:42.307 [2024-12-06 07:12:14.685483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.686740] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:42.307 [2024-12-06 07:12:14.699601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.699639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:42.307 [2024-12-06 07:12:14.699654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.897 ms 00:45:42.307 [2024-12-06 07:12:14.699664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.699762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.699781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:42.307 [2024-12-06 07:12:14.699793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:45:42.307 [2024-12-06 07:12:14.699802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
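
Almost everything from here to the end of the run is the same mngt/ftl_mngt.c trace_step triple (an Action or Rollback header, then name, duration, status), so a per-step timing summary is often more readable than the raw stream. A small, hypothetical helper that folds a saved copy of this log into such a summary; the regexes mirror the trace_step lines above, and pairing each name with the next duration line is an assumption that holds for this output rather than a guarantee of the logger:

    import re
    import sys

    NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def step_durations(lines):
        # Pair each "name:" trace_step with the "duration:" entry that follows it.
        pending = None
        for line in lines:
            if m := NAME.search(line):
                pending = m.group(1).strip()
            elif (m := DUR.search(line)) and pending is not None:
                yield pending, float(m.group(1))
                pending = None

    if __name__ == "__main__":
        # Usage: python ftl_steps.py < build.log
        for name, ms in sorted(step_durations(sys.stdin), key=lambda s: -s[1]):
            print(f"{ms:9.3f} ms  {name}")

Fed this run, the slowest startup steps would come out as Restore P2L checkpoints (63.249 ms) and Initialize NV cache (44.840 ms), matching the entries further down.
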
00:45:42.307 [2024-12-06 07:12:14.704182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.704218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:42.307 [2024-12-06 07:12:14.704231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.303 ms 00:45:42.307 [2024-12-06 07:12:14.704247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.704323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.704340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:42.307 [2024-12-06 07:12:14.704350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:45:42.307 [2024-12-06 07:12:14.704360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.704414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.704429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:42.307 [2024-12-06 07:12:14.704466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:45:42.307 [2024-12-06 07:12:14.704492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.704529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:42.307 [2024-12-06 07:12:14.708199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.708373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:42.307 [2024-12-06 07:12:14.708404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.678 ms 00:45:42.307 [2024-12-06 07:12:14.708416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.708505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.708524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:42.307 [2024-12-06 07:12:14.708536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:45:42.307 [2024-12-06 07:12:14.708547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.708591] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:42.307 [2024-12-06 07:12:14.708622] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:42.307 [2024-12-06 07:12:14.708663] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:42.307 [2024-12-06 07:12:14.708701] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:42.307 [2024-12-06 07:12:14.708863] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:42.307 [2024-12-06 07:12:14.708882] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:42.307 [2024-12-06 07:12:14.708910] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:42.307 [2024-12-06 07:12:14.708938] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:42.307 [2024-12-06 07:12:14.708950] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:42.307 [2024-12-06 07:12:14.708960] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:42.307 [2024-12-06 07:12:14.708970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:42.307 [2024-12-06 07:12:14.708985] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:42.307 [2024-12-06 07:12:14.708995] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:42.307 [2024-12-06 07:12:14.709005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.709015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:42.307 [2024-12-06 07:12:14.709025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:45:42.307 [2024-12-06 07:12:14.709034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.709112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.307 [2024-12-06 07:12:14.709125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:42.307 [2024-12-06 07:12:14.709135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:45:42.307 [2024-12-06 07:12:14.709159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.307 [2024-12-06 07:12:14.709269] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:42.307 [2024-12-06 07:12:14.709286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:42.307 [2024-12-06 07:12:14.709296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:42.307 [2024-12-06 07:12:14.709326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:42.307 [2024-12-06 07:12:14.709352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:42.307 [2024-12-06 07:12:14.709369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:42.307 [2024-12-06 07:12:14.709378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:42.307 [2024-12-06 07:12:14.709387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:42.307 [2024-12-06 07:12:14.709406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:42.307 [2024-12-06 07:12:14.709416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:42.307 [2024-12-06 07:12:14.709425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:42.307 [2024-12-06 07:12:14.709442] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:42.307 [2024-12-06 07:12:14.709467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:42.307 [2024-12-06 07:12:14.709492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:42.307 [2024-12-06 07:12:14.709517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:42.307 [2024-12-06 07:12:14.709542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:42.307 [2024-12-06 07:12:14.709567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:42.307 [2024-12-06 07:12:14.709583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:42.307 [2024-12-06 07:12:14.709593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:42.307 [2024-12-06 07:12:14.709601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:42.307 [2024-12-06 07:12:14.709609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:42.307 [2024-12-06 07:12:14.709618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:45:42.307 [2024-12-06 07:12:14.709626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:42.307 [2024-12-06 07:12:14.709643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:42.307 [2024-12-06 07:12:14.709651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709660] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:42.307 [2024-12-06 07:12:14.709670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:42.307 [2024-12-06 07:12:14.709679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.307 [2024-12-06 07:12:14.709698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:42.307 [2024-12-06 07:12:14.709707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:42.307 [2024-12-06 
07:12:14.709715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:42.307 [2024-12-06 07:12:14.709724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:42.307 [2024-12-06 07:12:14.709732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:42.307 [2024-12-06 07:12:14.709740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:42.307 [2024-12-06 07:12:14.709751] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:42.307 [2024-12-06 07:12:14.709762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:42.307 [2024-12-06 07:12:14.709792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:42.307 [2024-12-06 07:12:14.709804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:42.307 [2024-12-06 07:12:14.709813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:42.307 [2024-12-06 07:12:14.709823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:42.307 [2024-12-06 07:12:14.709832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:42.307 [2024-12-06 07:12:14.709841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:42.307 [2024-12-06 07:12:14.709850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:42.307 [2024-12-06 07:12:14.709859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:42.307 [2024-12-06 07:12:14.709869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:42.307 [2024-12-06 07:12:14.709878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:42.307 [2024-12-06 07:12:14.709887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:42.307 [2024-12-06 07:12:14.709896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:42.307 [2024-12-06 07:12:14.709906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:42.308 [2024-12-06 07:12:14.709916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:42.308 [2024-12-06 07:12:14.709925] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:42.308 [2024-12-06 07:12:14.709935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:42.308 [2024-12-06 07:12:14.709946] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:42.308 [2024-12-06 07:12:14.709956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:42.308 [2024-12-06 07:12:14.709965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:42.308 [2024-12-06 07:12:14.709974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:42.308 [2024-12-06 07:12:14.709985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.709994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:42.308 [2024-12-06 07:12:14.710004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:45:42.308 [2024-12-06 07:12:14.710013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.736493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.736546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:42.308 [2024-12-06 07:12:14.736579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.418 ms 00:45:42.308 [2024-12-06 07:12:14.736595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.736703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.736717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:42.308 [2024-12-06 07:12:14.736781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:45:42.308 [2024-12-06 07:12:14.736807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.781754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.781824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:42.308 [2024-12-06 07:12:14.781858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.840 ms 00:45:42.308 [2024-12-06 07:12:14.781868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.781922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.781938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:42.308 [2024-12-06 07:12:14.781955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:45:42.308 [2024-12-06 07:12:14.781966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.782340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.782357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:42.308 [2024-12-06 07:12:14.782368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:45:42.308 [2024-12-06 07:12:14.782388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.782518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.782535] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:42.308 [2024-12-06 07:12:14.782552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:45:42.308 [2024-12-06 07:12:14.782561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.796256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.796296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:42.308 [2024-12-06 07:12:14.796312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.672 ms 00:45:42.308 [2024-12-06 07:12:14.796322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.809951] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:45:42.308 [2024-12-06 07:12:14.809988] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:42.308 [2024-12-06 07:12:14.810021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.810033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:42.308 [2024-12-06 07:12:14.810044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.561 ms 00:45:42.308 [2024-12-06 07:12:14.810053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.835021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.835059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:42.308 [2024-12-06 07:12:14.835074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.926 ms 00:45:42.308 [2024-12-06 07:12:14.835084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.847517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.847554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:42.308 [2024-12-06 07:12:14.847569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.390 ms 00:45:42.308 [2024-12-06 07:12:14.847578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.859894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.859931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:42.308 [2024-12-06 07:12:14.859945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.279 ms 00:45:42.308 [2024-12-06 07:12:14.859955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.308 [2024-12-06 07:12:14.860686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.308 [2024-12-06 07:12:14.860738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:42.308 [2024-12-06 07:12:14.860774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:45:42.308 [2024-12-06 07:12:14.860785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.566 [2024-12-06 07:12:14.924058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.566 [2024-12-06 07:12:14.924142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:45:42.566 [2024-12-06 07:12:14.924166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.249 ms 00:45:42.566 [2024-12-06 07:12:14.924177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.566 [2024-12-06 07:12:14.935023] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:42.566 [2024-12-06 07:12:14.937228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.566 [2024-12-06 07:12:14.937397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:42.566 [2024-12-06 07:12:14.937422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.995 ms 00:45:42.566 [2024-12-06 07:12:14.937433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.566 [2024-12-06 07:12:14.937553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.566 [2024-12-06 07:12:14.937572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:42.566 [2024-12-06 07:12:14.937588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:42.566 [2024-12-06 07:12:14.937599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.566 [2024-12-06 07:12:14.939250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.566 [2024-12-06 07:12:14.939282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:42.566 [2024-12-06 07:12:14.939295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:45:42.566 [2024-12-06 07:12:14.939304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.566 [2024-12-06 07:12:14.939336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.566 [2024-12-06 07:12:14.939349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:42.566 [2024-12-06 07:12:14.939359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:42.566 [2024-12-06 07:12:14.939369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.566 [2024-12-06 07:12:14.939409] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:42.566 [2024-12-06 07:12:14.939424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.566 [2024-12-06 07:12:14.939433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:42.566 [2024-12-06 07:12:14.939443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:45:42.566 [2024-12-06 07:12:14.939452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.567 [2024-12-06 07:12:14.964243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.567 [2024-12-06 07:12:14.964282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:42.567 [2024-12-06 07:12:14.964303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.771 ms 00:45:42.567 [2024-12-06 07:12:14.964313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.567 [2024-12-06 07:12:14.964381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.567 [2024-12-06 07:12:14.964397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:42.567 [2024-12-06 07:12:14.964407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 
00:45:42.567 [2024-12-06 07:12:14.964416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.567 [2024-12-06 07:12:14.965735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.789 ms, result 0 00:45:43.942  [2024-12-06T07:12:17.469Z] Copying: 968/1048576 [kB] (968 kBps) [2024-12-06T07:12:18.406Z] Copying: 5372/1048576 [kB] (4404 kBps) [2024-12-06T07:12:19.342Z] Copying: 30/1024 [MB] (25 MBps) [2024-12-06T07:12:20.278Z] Copying: 58/1024 [MB] (28 MBps) [2024-12-06T07:12:21.214Z] Copying: 86/1024 [MB] (27 MBps) [2024-12-06T07:12:22.150Z] Copying: 114/1024 [MB] (27 MBps) [2024-12-06T07:12:23.526Z] Copying: 141/1024 [MB] (27 MBps) [2024-12-06T07:12:24.459Z] Copying: 169/1024 [MB] (27 MBps) [2024-12-06T07:12:25.396Z] Copying: 197/1024 [MB] (27 MBps) [2024-12-06T07:12:26.334Z] Copying: 224/1024 [MB] (27 MBps) [2024-12-06T07:12:27.270Z] Copying: 251/1024 [MB] (26 MBps) [2024-12-06T07:12:28.207Z] Copying: 279/1024 [MB] (28 MBps) [2024-12-06T07:12:29.588Z] Copying: 308/1024 [MB] (28 MBps) [2024-12-06T07:12:30.157Z] Copying: 335/1024 [MB] (27 MBps) [2024-12-06T07:12:31.538Z] Copying: 364/1024 [MB] (28 MBps) [2024-12-06T07:12:32.477Z] Copying: 392/1024 [MB] (28 MBps) [2024-12-06T07:12:33.415Z] Copying: 421/1024 [MB] (28 MBps) [2024-12-06T07:12:34.353Z] Copying: 450/1024 [MB] (28 MBps) [2024-12-06T07:12:35.294Z] Copying: 478/1024 [MB] (28 MBps) [2024-12-06T07:12:36.233Z] Copying: 507/1024 [MB] (28 MBps) [2024-12-06T07:12:37.214Z] Copying: 535/1024 [MB] (28 MBps) [2024-12-06T07:12:38.151Z] Copying: 563/1024 [MB] (28 MBps) [2024-12-06T07:12:39.530Z] Copying: 592/1024 [MB] (28 MBps) [2024-12-06T07:12:40.466Z] Copying: 620/1024 [MB] (28 MBps) [2024-12-06T07:12:41.404Z] Copying: 648/1024 [MB] (28 MBps) [2024-12-06T07:12:42.342Z] Copying: 675/1024 [MB] (26 MBps) [2024-12-06T07:12:43.285Z] Copying: 703/1024 [MB] (28 MBps) [2024-12-06T07:12:44.222Z] Copying: 731/1024 [MB] (28 MBps) [2024-12-06T07:12:45.160Z] Copying: 759/1024 [MB] (28 MBps) [2024-12-06T07:12:46.534Z] Copying: 788/1024 [MB] (28 MBps) [2024-12-06T07:12:47.469Z] Copying: 816/1024 [MB] (28 MBps) [2024-12-06T07:12:48.405Z] Copying: 845/1024 [MB] (28 MBps) [2024-12-06T07:12:49.355Z] Copying: 873/1024 [MB] (28 MBps) [2024-12-06T07:12:50.289Z] Copying: 901/1024 [MB] (27 MBps) [2024-12-06T07:12:51.225Z] Copying: 928/1024 [MB] (27 MBps) [2024-12-06T07:12:52.159Z] Copying: 956/1024 [MB] (27 MBps) [2024-12-06T07:12:53.539Z] Copying: 985/1024 [MB] (28 MBps) [2024-12-06T07:12:53.539Z] Copying: 1013/1024 [MB] (28 MBps) [2024-12-06T07:12:54.478Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-06 07:12:54.390097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:21.887 [2024-12-06 07:12:54.390196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:21.887 [2024-12-06 07:12:54.390233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:21.887 [2024-12-06 07:12:54.390245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:21.887 [2024-12-06 07:12:54.390273] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:21.887 [2024-12-06 07:12:54.393656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:21.887 [2024-12-06 07:12:54.393687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:21.887 [2024-12-06 07:12:54.393718] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 3.362 ms 00:46:21.887 [2024-12-06 07:12:54.393738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:21.887 [2024-12-06 07:12:54.393947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:21.887 [2024-12-06 07:12:54.393971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:21.887 [2024-12-06 07:12:54.393983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:46:21.887 [2024-12-06 07:12:54.393993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:21.887 [2024-12-06 07:12:54.406218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:21.887 [2024-12-06 07:12:54.406269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:21.887 [2024-12-06 07:12:54.406288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.205 ms 00:46:21.887 [2024-12-06 07:12:54.406300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:21.887 [2024-12-06 07:12:54.413191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:21.887 [2024-12-06 07:12:54.413224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:21.887 [2024-12-06 07:12:54.413264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:46:21.887 [2024-12-06 07:12:54.413275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:21.887 [2024-12-06 07:12:54.444813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:21.887 [2024-12-06 07:12:54.444880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:21.887 [2024-12-06 07:12:54.444915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.464 ms 00:46:21.887 [2024-12-06 07:12:54.444924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:21.887 [2024-12-06 07:12:54.459625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:21.887 [2024-12-06 07:12:54.459871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:21.887 [2024-12-06 07:12:54.459898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.655 ms 00:46:21.887 [2024-12-06 07:12:54.459910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:21.887 [2024-12-06 07:12:54.461935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:21.887 [2024-12-06 07:12:54.461973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:21.887 [2024-12-06 07:12:54.462005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.977 ms 00:46:21.887 [2024-12-06 07:12:54.462039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:22.146 [2024-12-06 07:12:54.489229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:22.146 [2024-12-06 07:12:54.489414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:22.146 [2024-12-06 07:12:54.489455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.171 ms 00:46:22.146 [2024-12-06 07:12:54.489467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:22.146 [2024-12-06 07:12:54.515030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:22.146 [2024-12-06 07:12:54.515231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:22.146 
[2024-12-06 07:12:54.515255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.515 ms 00:46:22.146 [2024-12-06 07:12:54.515266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:22.146 [2024-12-06 07:12:54.540209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:22.146 [2024-12-06 07:12:54.540245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:22.146 [2024-12-06 07:12:54.540276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.899 ms 00:46:22.146 [2024-12-06 07:12:54.540286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:22.146 [2024-12-06 07:12:54.565240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:22.146 [2024-12-06 07:12:54.565275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:22.146 [2024-12-06 07:12:54.565306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.895 ms 00:46:22.146 [2024-12-06 07:12:54.565315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:22.146 [2024-12-06 07:12:54.565352] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:22.146 [2024-12-06 07:12:54.565372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:46:22.146 [2024-12-06 07:12:54.565385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:46:22.146 [2024-12-06 07:12:54.565395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 
07:12:54.565540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:22.146 [2024-12-06 07:12:54.565627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 
00:46:22.147 [2024-12-06 07:12:54.565845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.565992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:22.147 [2024-12-06 07:12:54.566126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:46:22.147 [2024-12-06 07:12:54.566548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:46:22.147 [2024-12-06 07:12:54.566559] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3095e218-c1bf-4965-bf6a-aaea9793f2eb
00:46:22.147 [2024-12-06 07:12:54.566570] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:46:22.147 [2024-12-06 07:12:54.566580] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135872
00:46:22.147 [2024-12-06 07:12:54.566595] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133888
00:46:22.147 [2024-12-06 07:12:54.566607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148
00:46:22.147 [2024-12-06 07:12:54.566617] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:46:22.147 [2024-12-06 07:12:54.566639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:46:22.147 [2024-12-06 07:12:54.566650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:46:22.147 [2024-12-06 07:12:54.566660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:46:22.147 [2024-12-06 07:12:54.566669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:46:22.147 [2024-12-06 07:12:54.566679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:22.147 [2024-12-06 07:12:54.566690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:46:22.147 [2024-12-06 07:12:54.566701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms
00:46:22.147 [2024-12-06 07:12:54.566712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.147 [2024-12-06 07:12:54.580725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:22.147 [2024-12-06 07:12:54.580770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:46:22.147 [2024-12-06 07:12:54.580816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.991 ms
00:46:22.147 [2024-12-06 07:12:54.580826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
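
The statistics dump above reports WAF 1.0148 for this run. Read straight off the dumped counters, the write amplification factor is just total media writes over user-initiated writes; a quick check of that arithmetic (a reading of the dump, not SPDK's internal code):

    # WAF sanity check against the ftl_dev_dump_stats values above
    total_writes = 135872   # "total writes" from the dump
    user_writes = 133888    # "user writes" from the dump
    print(round(total_writes / user_writes, 4))  # 1.0148, matching the logged WAF
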
00:46:22.147 [2024-12-06 07:12:54.581210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:22.147 [2024-12-06 07:12:54.581226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:46:22.147 [2024-12-06 07:12:54.581237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms
00:46:22.147 [2024-12-06 07:12:54.581247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.147 [2024-12-06 07:12:54.616015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.147 [2024-12-06 07:12:54.616202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:46:22.147 [2024-12-06 07:12:54.616244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.147 [2024-12-06 07:12:54.616256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.147 [2024-12-06 07:12:54.616312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.147 [2024-12-06 07:12:54.616326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:46:22.147 [2024-12-06 07:12:54.616337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.147 [2024-12-06 07:12:54.616347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.147 [2024-12-06 07:12:54.616450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.147 [2024-12-06 07:12:54.616513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:46:22.147 [2024-12-06 07:12:54.616526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.147 [2024-12-06 07:12:54.616536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.147 [2024-12-06 07:12:54.616573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.147 [2024-12-06 07:12:54.616587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:46:22.147 [2024-12-06 07:12:54.616598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.147 [2024-12-06 07:12:54.616608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.147 [2024-12-06 07:12:54.694858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.147 [2024-12-06 07:12:54.694913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:46:22.147 [2024-12-06 07:12:54.694929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.147 [2024-12-06 07:12:54.694939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.406 [2024-12-06 07:12:54.761161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.406 [2024-12-06 07:12:54.761348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:46:22.406 [2024-12-06 07:12:54.761374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.406 [2024-12-06 07:12:54.761385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.406 [2024-12-06 07:12:54.761453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.406 [2024-12-06 07:12:54.761478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:46:22.406 [2024-12-06 07:12:54.761489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.406 [2024-12-06 07:12:54.761499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.406 [2024-12-06 07:12:54.761559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.406 [2024-12-06 07:12:54.761575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:46:22.406 [2024-12-06 07:12:54.761586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.406 [2024-12-06 07:12:54.761596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.406 [2024-12-06 07:12:54.761721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.406 [2024-12-06 07:12:54.761776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:46:22.406 [2024-12-06 07:12:54.761795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.406 [2024-12-06 07:12:54.761805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.406 [2024-12-06 07:12:54.761853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.406 [2024-12-06 07:12:54.761883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:46:22.406 [2024-12-06 07:12:54.761909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.406 [2024-12-06 07:12:54.761919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.406 [2024-12-06 07:12:54.761958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.406 [2024-12-06 07:12:54.761971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:46:22.406 [2024-12-06 07:12:54.761986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.406 [2024-12-06 07:12:54.761995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.406 [2024-12-06 07:12:54.762040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:22.406 [2024-12-06 07:12:54.762054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:46:22.406 [2024-12-06 07:12:54.762080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:22.406 [2024-12-06 07:12:54.762090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:22.406 [2024-12-06 07:12:54.762248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.125 ms, result 0
00:46:22.972
00:46:22.972
00:46:22.972 07:12:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:46:24.875 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
07:12:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
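
dirty_shutdown.sh first md5-verifies the half of the test data written before the dirty shutdown, then reruns spdk_dd to read the second half back out of ftl0. The --skip/--count values above are block counts; given that the copy phase below reports 1024/1024 [MB] for 262144 blocks, each block works out to 4 KiB (an inference from this log, not a flag shown here):

    # Offset arithmetic implied by the spdk_dd flags above (block size inferred)
    BLOCK_SIZE = 4096                    # 1024 MiB / 262144 blocks = 4096 B
    skip, count = 262144, 262144         # --skip and --count, in blocks
    print(skip * BLOCK_SIZE // 2**20)    # input offset: 1024 MiB
    print(count * BLOCK_SIZE // 2**20)   # amount copied: 1024 MiB
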
[2024-12-06 07:12:57.335255] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization...
00:46:24.875 [2024-12-06 07:12:57.335374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82788 ]
00:46:25.134 [2024-12-06 07:12:57.505049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:46:25.134 [2024-12-06 07:12:57.621010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:46:25.394 [2024-12-06 07:12:57.880617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:46:25.394 [2024-12-06 07:12:57.881013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:46:25.654 [2024-12-06 07:12:58.037809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.654 [2024-12-06 07:12:58.037855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:46:25.654 [2024-12-06 07:12:58.037888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:46:25.654 [2024-12-06 07:12:58.037898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.654 [2024-12-06 07:12:58.037955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.654 [2024-12-06 07:12:58.037974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:46:25.654 [2024-12-06 07:12:58.037984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:46:25.654 [2024-12-06 07:12:58.037994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.654 [2024-12-06 07:12:58.038021] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:46:25.654 [2024-12-06 07:12:58.038779] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:46:25.654 [2024-12-06 07:12:58.038803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.654 [2024-12-06 07:12:58.038829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:46:25.654 [2024-12-06 07:12:58.038840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms
00:46:25.654 [2024-12-06 07:12:58.038850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.654 [2024-12-06 07:12:58.039972] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:46:25.654 [2024-12-06 07:12:58.052960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.654 [2024-12-06 07:12:58.052997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:46:25.654 [2024-12-06 07:12:58.053011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.989 ms
00:46:25.654 [2024-12-06 07:12:58.053020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.654 [2024-12-06 07:12:58.053084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.654 [2024-12-06 07:12:58.053100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:46:25.654 [2024-12-06 07:12:58.053110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:46:25.654 [2024-12-06 07:12:58.053118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.654 [2024-12-06 07:12:58.057307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.654 [2024-12-06 07:12:58.057341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:46:25.655 [2024-12-06 07:12:58.057354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.118 ms
00:46:25.655 [2024-12-06 07:12:58.057368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.655 [2024-12-06 07:12:58.057441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.655 [2024-12-06 07:12:58.057456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:46:25.655 [2024-12-06 07:12:58.057466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:46:25.655 [2024-12-06 07:12:58.057474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.655 [2024-12-06 07:12:58.057527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.655 [2024-12-06 07:12:58.057541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:46:25.655 [2024-12-06 07:12:58.057551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:46:25.655 [2024-12-06 07:12:58.057559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.655 [2024-12-06 07:12:58.057591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:46:25.655 [2024-12-06 07:12:58.061217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.655 [2024-12-06 07:12:58.061249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:46:25.655 [2024-12-06 07:12:58.061265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms
00:46:25.655 [2024-12-06 07:12:58.061274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.655 [2024-12-06 07:12:58.061308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.655 [2024-12-06 07:12:58.061321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:46:25.655 [2024-12-06 07:12:58.061330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:46:25.655 [2024-12-06 07:12:58.061339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.655 [2024-12-06 07:12:58.061362] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:46:25.655 [2024-12-06 07:12:58.061385] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:46:25.655 [2024-12-06 07:12:58.061418] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:46:25.655 [2024-12-06 07:12:58.061437] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:46:25.655 [2024-12-06 07:12:58.061523] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:46:25.655 [2024-12-06 07:12:58.061535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:46:25.655 [2024-12-06 07:12:58.061546] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:46:25.655 [2024-12-06 07:12:58.061558] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:46:25.655 [2024-12-06 07:12:58.061568] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:46:25.655 [2024-12-06 07:12:58.061578] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:46:25.655 [2024-12-06 07:12:58.061586] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:46:25.655 [2024-12-06 07:12:58.061597] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:46:25.655 [2024-12-06 07:12:58.061605] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:46:25.655 [2024-12-06 07:12:58.061615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.655 [2024-12-06 07:12:58.061623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:46:25.655 [2024-12-06 07:12:58.061633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms
00:46:25.655 [2024-12-06 07:12:58.061641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.655 [2024-12-06 07:12:58.061745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.655 [2024-12-06 07:12:58.061760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:46:25.655 [2024-12-06 07:12:58.061771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms
00:46:25.655 [2024-12-06 07:12:58.061779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.655 [2024-12-06 07:12:58.061918] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:46:25.655 [2024-12-06 07:12:58.061938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:46:25.655 [2024-12-06 07:12:58.061949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:46:25.655 [2024-12-06 07:12:58.061958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:25.655 [2024-12-06 07:12:58.061968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:46:25.655 [2024-12-06 07:12:58.061976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:46:25.655 [2024-12-06 07:12:58.061985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:46:25.655 [2024-12-06 07:12:58.061993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:46:25.655 [2024-12-06 07:12:58.062002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:46:25.655 [2024-12-06 07:12:58.062018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:46:25.655 [2024-12-06 07:12:58.062026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:46:25.655 [2024-12-06 07:12:58.062035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:46:25.655 [2024-12-06 07:12:58.062069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:46:25.655 [2024-12-06 07:12:58.062093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:46:25.655 [2024-12-06 07:12:58.062102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:46:25.655 [2024-12-06 07:12:58.062135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:46:25.655 [2024-12-06 07:12:58.062143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:46:25.655 [2024-12-06 07:12:58.062177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:46:25.655 [2024-12-06 07:12:58.062194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:46:25.655 [2024-12-06 07:12:58.062203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:46:25.655 [2024-12-06 07:12:58.062220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:46:25.655 [2024-12-06 07:12:58.062229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:46:25.655 [2024-12-06 07:12:58.062246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:46:25.655 [2024-12-06 07:12:58.062255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:46:25.655 [2024-12-06 07:12:58.062272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:46:25.655 [2024-12-06 07:12:58.062281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:46:25.655 [2024-12-06 07:12:58.062299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:46:25.655 [2024-12-06 07:12:58.062308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:46:25.655 [2024-12-06 07:12:58.062316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:46:25.655 [2024-12-06 07:12:58.062325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:46:25.655 [2024-12-06 07:12:58.062334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:46:25.655 [2024-12-06 07:12:58.062342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:25.655 [2024-12-06 07:12:58.062351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:46:25.656 [2024-12-06 07:12:58.062359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:46:25.656 [2024-12-06 07:12:58.062368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:25.656 [2024-12-06 07:12:58.062377] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:46:25.656 [2024-12-06 07:12:58.062387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:46:25.656 [2024-12-06 07:12:58.062396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:46:25.656 [2024-12-06 07:12:58.062405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:25.656 [2024-12-06 07:12:58.062415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:46:25.656 [2024-12-06 07:12:58.062424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:46:25.656 [2024-12-06 07:12:58.062433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:46:25.656 [2024-12-06 07:12:58.062442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:46:25.656 [2024-12-06 07:12:58.062451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:46:25.656 [2024-12-06 07:12:58.062460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:46:25.656 [2024-12-06 07:12:58.062470] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:46:25.656 [2024-12-06 07:12:58.062483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:46:25.656 [2024-12-06 07:12:58.062498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:46:25.656 [2024-12-06 07:12:58.062509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:46:25.656 [2024-12-06 07:12:58.062518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:46:25.656 [2024-12-06 07:12:58.062528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:46:25.656 [2024-12-06 07:12:58.062538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:46:25.656 [2024-12-06 07:12:58.062548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:46:25.656 [2024-12-06 07:12:58.062557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:46:25.656 [2024-12-06 07:12:58.062567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:46:25.656 [2024-12-06 07:12:58.062576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:46:25.656 [2024-12-06 07:12:58.062585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:46:25.656 [2024-12-06 07:12:58.062595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:46:25.656 [2024-12-06 07:12:58.062604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:46:25.656 [2024-12-06 07:12:58.062614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:46:25.656 [2024-12-06 07:12:58.062623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:46:25.656 [2024-12-06 07:12:58.062633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:46:25.656 [2024-12-06 07:12:58.062644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:46:25.656 [2024-12-06 07:12:58.062654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:46:25.656 [2024-12-06 07:12:58.062665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:46:25.656 [2024-12-06 07:12:58.062674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:46:25.656 [2024-12-06 07:12:58.062684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:46:25.656 [2024-12-06 07:12:58.062694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.062706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:46:25.656 [2024-12-06 07:12:58.062716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms
00:46:25.656 [2024-12-06 07:12:58.062726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.656 [2024-12-06 07:12:58.094653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.094953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:46:25.656 [2024-12-06 07:12:58.094982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.871 ms
00:46:25.656 [2024-12-06 07:12:58.095000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.656 [2024-12-06 07:12:58.095118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.095132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:46:25.656 [2024-12-06 07:12:58.095143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:46:25.656 [2024-12-06 07:12:58.095152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.656 [2024-12-06 07:12:58.134739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.134782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:46:25.656 [2024-12-06 07:12:58.134814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.499 ms
00:46:25.656 [2024-12-06 07:12:58.134824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.656 [2024-12-06 07:12:58.134873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.134888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:46:25.656 [2024-12-06 07:12:58.134903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:46:25.656 [2024-12-06 07:12:58.134912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.656 [2024-12-06 07:12:58.135317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.135341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:46:25.656 [2024-12-06 07:12:58.135353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms
00:46:25.656 [2024-12-06 07:12:58.135363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.656 [2024-12-06 07:12:58.135521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.135553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:46:25.656 [2024-12-06 07:12:58.135569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms
00:46:25.656 [2024-12-06 07:12:58.135578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.656 [2024-12-06 07:12:58.149122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.149160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:46:25.656 [2024-12-06 07:12:58.149174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.521 ms
00:46:25.656 [2024-12-06 07:12:58.149183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.656 [2024-12-06 07:12:58.162044] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:46:25.656 [2024-12-06 07:12:58.162215] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:46:25.656 [2024-12-06 07:12:58.162237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.656 [2024-12-06 07:12:58.162247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:46:25.656 [2024-12-06 07:12:58.162259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.952 ms
00:46:25.657 [2024-12-06 07:12:58.162269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.657 [2024-12-06 07:12:58.186445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.657 [2024-12-06 07:12:58.186485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:46:25.657 [2024-12-06 07:12:58.186515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.127 ms
00:46:25.657 [2024-12-06 07:12:58.186525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.657 [2024-12-06 07:12:58.200784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.657 [2024-12-06 07:12:58.200837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:46:25.657 [2024-12-06 07:12:58.200852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.202 ms
00:46:25.657 [2024-12-06 07:12:58.200861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.657 [2024-12-06 07:12:58.214115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.657 [2024-12-06 07:12:58.214150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:46:25.657 [2024-12-06 07:12:58.214180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.200 ms
00:46:25.657 [2024-12-06 07:12:58.214189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.657 [2024-12-06 07:12:58.214897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.657 [2024-12-06 07:12:58.215041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:46:25.657 [2024-12-06 07:12:58.215071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms
00:46:25.657 [2024-12-06 07:12:58.215082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.916 [2024-12-06 07:12:58.274869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.916 [2024-12-06 07:12:58.274928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:46:25.916 [2024-12-06 07:12:58.274967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.759 ms
00:46:25.916 [2024-12-06 07:12:58.274978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.916 [2024-12-06 07:12:58.285261] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:46:25.916 [2024-12-06 07:12:58.287247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.916 [2024-12-06 07:12:58.287276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:46:25.916 [2024-12-06 07:12:58.287305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.211 ms
00:46:25.916 [2024-12-06 07:12:58.287314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.916 [2024-12-06 07:12:58.287424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.916 [2024-12-06 07:12:58.287442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:46:25.916 [2024-12-06 07:12:58.287456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:46:25.916 [2024-12-06 07:12:58.287465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.916 [2024-12-06 07:12:58.288121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.916 [2024-12-06 07:12:58.288162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:46:25.916 [2024-12-06 07:12:58.288175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms
00:46:25.916 [2024-12-06 07:12:58.288185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.916 [2024-12-06 07:12:58.288212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.916 [2024-12-06 07:12:58.288225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:46:25.916 [2024-12-06 07:12:58.288235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:46:25.916 [2024-12-06 07:12:58.288246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.916 [2024-12-06 07:12:58.288290] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:46:25.916 [2024-12-06 07:12:58.288306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.916 [2024-12-06 07:12:58.288316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:46:25.916 [2024-12-06 07:12:58.288326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:46:25.916 [2024-12-06 07:12:58.288336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.916 [2024-12-06 07:12:58.313329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.916 [2024-12-06 07:12:58.313366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:46:25.916 [2024-12-06 07:12:58.313402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.971 ms
00:46:25.916 [2024-12-06 07:12:58.313412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.916 [2024-12-06 07:12:58.313480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:25.916 [2024-12-06 07:12:58.313496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:46:25.916 [2024-12-06 07:12:58.313506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:46:25.916 [2024-12-06 07:12:58.313515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:25.917 [2024-12-06 07:12:58.314850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.461 ms, result 0
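
The 'FTL startup' management process reports 276.461 ms overall, slightly more than the sum of the individual step durations traced above; the remainder is presumably time spent between traced steps. Summing the logged values:

    # Step durations (ms) transcribed from the startup trace_step records above
    steps_ms = [
        0.004, 0.033, 0.789, 12.989, 0.021, 4.118, 0.051, 0.008, 3.634,
        0.010, 0.255, 0.086, 0.836, 31.871, 0.071, 39.499, 0.004, 0.305,
        0.131, 13.521, 12.952, 24.127, 14.202, 13.200, 0.610, 59.759,
        12.211, 0.006, 0.608, 0.006, 0.017, 24.971, 0.031,
    ]
    print(round(sum(steps_ms), 3))  # 270.936 ms, vs. 276.461 ms reported overall
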
00:46:27.296  [2024-12-06T07:13:00.827Z] Copying: 23/1024 [MB] (23 MBps)
[2024-12-06T07:13:01.766Z] Copying: 46/1024 [MB] (23 MBps)
[2024-12-06T07:13:02.701Z] Copying: 69/1024 [MB] (22 MBps)
[2024-12-06T07:13:03.638Z] Copying: 92/1024 [MB] (23 MBps)
[2024-12-06T07:13:04.574Z] Copying: 115/1024 [MB] (23 MBps)
[2024-12-06T07:13:05.509Z] Copying: 137/1024 [MB] (22 MBps)
[2024-12-06T07:13:06.884Z] Copying: 160/1024 [MB] (22 MBps)
[2024-12-06T07:13:07.821Z] Copying: 183/1024 [MB] (22 MBps)
[2024-12-06T07:13:08.789Z] Copying: 206/1024 [MB] (22 MBps)
[2024-12-06T07:13:09.738Z] Copying: 229/1024 [MB] (23 MBps)
[2024-12-06T07:13:10.677Z] Copying: 252/1024 [MB] (23 MBps)
[2024-12-06T07:13:11.616Z] Copying: 275/1024 [MB] (23 MBps)
[2024-12-06T07:13:12.554Z] Copying: 299/1024 [MB] (23 MBps)
[2024-12-06T07:13:13.495Z] Copying: 321/1024 [MB] (22 MBps)
[2024-12-06T07:13:14.875Z] Copying: 344/1024 [MB] (22 MBps)
[2024-12-06T07:13:15.811Z] Copying: 367/1024 [MB] (23 MBps)
[2024-12-06T07:13:16.749Z] Copying: 391/1024 [MB] (23 MBps)
[2024-12-06T07:13:17.688Z] Copying: 413/1024 [MB] (22 MBps)
[2024-12-06T07:13:18.627Z] Copying: 436/1024 [MB] (22 MBps)
[2024-12-06T07:13:19.576Z] Copying: 459/1024 [MB] (22 MBps)
[2024-12-06T07:13:20.512Z] Copying: 482/1024 [MB] (22 MBps)
[2024-12-06T07:13:21.501Z] Copying: 504/1024 [MB] (22 MBps)
[2024-12-06T07:13:22.877Z] Copying: 527/1024 [MB] (22 MBps)
[2024-12-06T07:13:23.813Z] Copying: 550/1024 [MB] (22 MBps)
[2024-12-06T07:13:24.750Z] Copying: 573/1024 [MB] (23 MBps)
[2024-12-06T07:13:25.689Z] Copying: 596/1024 [MB] (23 MBps)
[2024-12-06T07:13:26.624Z] Copying: 620/1024 [MB] (23 MBps)
[2024-12-06T07:13:27.563Z] Copying: 643/1024 [MB] (23 MBps)
[2024-12-06T07:13:28.500Z] Copying: 666/1024 [MB] (23 MBps)
[2024-12-06T07:13:29.881Z] Copying: 689/1024 [MB] (22 MBps)
[2024-12-06T07:13:30.819Z] Copying: 711/1024 [MB] (22 MBps)
[2024-12-06T07:13:31.758Z] Copying: 733/1024 [MB] (22 MBps)
[2024-12-06T07:13:32.696Z] Copying: 756/1024 [MB] (22 MBps)
[2024-12-06T07:13:33.635Z] Copying: 778/1024 [MB] (22 MBps)
[2024-12-06T07:13:34.574Z] Copying: 801/1024 [MB] (22 MBps)
[2024-12-06T07:13:35.510Z] Copying: 824/1024 [MB] (22 MBps)
[2024-12-06T07:13:36.888Z] Copying: 846/1024 [MB] (22 MBps)
[2024-12-06T07:13:37.826Z] Copying: 869/1024 [MB] (22 MBps)
[2024-12-06T07:13:38.764Z] Copying: 891/1024 [MB] (22 MBps)
[2024-12-06T07:13:39.702Z] Copying: 914/1024 [MB] (22 MBps)
[2024-12-06T07:13:40.688Z] Copying: 936/1024 [MB] (22 MBps)
[2024-12-06T07:13:41.628Z] Copying: 959/1024 [MB] (22 MBps)
[2024-12-06T07:13:42.568Z] Copying: 981/1024 [MB] (21 MBps)
[2024-12-06T07:13:43.507Z] Copying: 1003/1024 [MB] (22 MBps)
[2024-12-06T07:13:43.507Z] Copying: 1024/1024 [MB] (average 22 MBps)
[2024-12-06 07:13:43.455693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.916 [2024-12-06 07:13:43.455848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:47:10.916 [2024-12-06 07:13:43.455911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:47:10.916 [2024-12-06 07:13:43.455935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.916 [2024-12-06 07:13:43.455981] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:47:10.916 [2024-12-06 07:13:43.461590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.916 [2024-12-06 07:13:43.461644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:47:10.916 [2024-12-06 07:13:43.461663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.580 ms
00:47:10.916 [2024-12-06 07:13:43.461677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.916 [2024-12-06 07:13:43.462001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.916 [2024-12-06 07:13:43.462024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:47:10.916 [2024-12-06 07:13:43.462039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms
00:47:10.916 [2024-12-06 07:13:43.462052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.916 [2024-12-06 07:13:43.467623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.916 [2024-12-06 07:13:43.467842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:47:10.916 [2024-12-06 07:13:43.467878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.547 ms
00:47:10.916 [2024-12-06 07:13:43.467901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:10.916 [2024-12-06 07:13:43.476894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:10.916 [2024-12-06 07:13:43.476940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:47:10.916 [2024-12-06 07:13:43.476961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.956 ms
00:47:10.916 [2024-12-06 07:13:43.476975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.176 [2024-12-06 07:13:43.515333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.176 [2024-12-06 07:13:43.515538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:47:11.176 [2024-12-06 07:13:43.515573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.262 ms
00:47:11.176 [2024-12-06 07:13:43.515588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.176 [2024-12-06 07:13:43.536639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.176 [2024-12-06 07:13:43.536692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:47:11.176 [2024-12-06 07:13:43.536732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.014 ms
00:47:11.176 [2024-12-06 07:13:43.536749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.176 [2024-12-06 07:13:43.538596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.176 [2024-12-06 07:13:43.538817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:47:11.176 [2024-12-06 07:13:43.538851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms
00:47:11.176 [2024-12-06 07:13:43.538866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.176 [2024-12-06 07:13:43.577066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.176 [2024-12-06 07:13:43.577119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:47:11.176 [2024-12-06 07:13:43.577141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.166 ms
00:47:11.176 [2024-12-06 07:13:43.577154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
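
The copy phase above moved 1024 MB with per-tick rates of 21-23 MBps and a reported average of 22 MBps; the progress timestamps give a consistent picture (the average presumably also counts time before the first tick):

    # Rough throughput check from the Copying progress lines above
    first_tick, last_tick = 0.827, 43.507   # seconds past 07:13:00
    mb_between = 1024 - 23                  # MB moved between first and last tick
    print(round(mb_between / (last_tick - first_tick), 1))  # ~23.5 MBps
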
00:47:11.176 [2024-12-06 07:13:43.615079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.176 [2024-12-06 07:13:43.615131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:47:11.176 [2024-12-06 07:13:43.615152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.893 ms
00:47:11.176 [2024-12-06 07:13:43.615166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.176 [2024-12-06 07:13:43.639546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.176 [2024-12-06 07:13:43.639582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:47:11.176 [2024-12-06 07:13:43.639596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.348 ms
00:47:11.176 [2024-12-06 07:13:43.639604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.176 [2024-12-06 07:13:43.664114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:11.176 [2024-12-06 07:13:43.664159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:47:11.176 [2024-12-06 07:13:43.664174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.459 ms
00:47:11.176 [2024-12-06 07:13:43.664183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:11.176 [2024-12-06 07:13:43.664204] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:47:11.176 [2024-12-06 07:13:43.664226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:47:11.176 [2024-12-06 07:13:43.664240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:47:11.176 [2024-12-06 07:13:43.664250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:47:11.176 [2024-12-06 07:13:43.664630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.664999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:47:11.177 [2024-12-06 07:13:43.665139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:11.177 [2024-12-06 07:13:43.665244] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:11.177 [2024-12-06 07:13:43.665253] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3095e218-c1bf-4965-bf6a-aaea9793f2eb 00:47:11.177 [2024-12-06 07:13:43.665261] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:47:11.177 [2024-12-06 07:13:43.665269] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:47:11.177 [2024-12-06 07:13:43.665278] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:47:11.177 [2024-12-06 07:13:43.665287] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:47:11.177 [2024-12-06 07:13:43.665322] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:11.177 [2024-12-06 07:13:43.665332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:11.177 [2024-12-06 07:13:43.665340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:11.177 [2024-12-06 07:13:43.665348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:11.177 [2024-12-06 07:13:43.665356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:11.177 [2024-12-06 07:13:43.665364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:11.177 [2024-12-06 07:13:43.665374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:11.177 [2024-12-06 07:13:43.665383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:47:11.177 [2024-12-06 07:13:43.665395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.177 [2024-12-06 07:13:43.678960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:11.177 [2024-12-06 07:13:43.679118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:11.177 [2024-12-06 07:13:43.679142] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.531 ms 00:47:11.177 [2024-12-06 07:13:43.679152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.177 [2024-12-06 07:13:43.679549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:11.177 [2024-12-06 07:13:43.679580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:11.177 [2024-12-06 07:13:43.679592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:47:11.177 [2024-12-06 07:13:43.679601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.177 [2024-12-06 07:13:43.713016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.177 [2024-12-06 07:13:43.713228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:11.177 [2024-12-06 07:13:43.713251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.177 [2024-12-06 07:13:43.713262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.177 [2024-12-06 07:13:43.713317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.177 [2024-12-06 07:13:43.713337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:11.177 [2024-12-06 07:13:43.713347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.177 [2024-12-06 07:13:43.713357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.177 [2024-12-06 07:13:43.713429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.177 [2024-12-06 07:13:43.713447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:11.177 [2024-12-06 07:13:43.713458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.177 [2024-12-06 07:13:43.713468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.177 [2024-12-06 07:13:43.713487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.177 [2024-12-06 07:13:43.713498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:11.177 [2024-12-06 07:13:43.713514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.177 [2024-12-06 07:13:43.713523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.793475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.437 [2024-12-06 07:13:43.793531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:11.437 [2024-12-06 07:13:43.793547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.437 [2024-12-06 07:13:43.793556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.859617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.437 [2024-12-06 07:13:43.859890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:11.437 [2024-12-06 07:13:43.859917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.437 [2024-12-06 07:13:43.859928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.860030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.437 [2024-12-06 07:13:43.860047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:47:11.437 [2024-12-06 07:13:43.860059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.437 [2024-12-06 07:13:43.860069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.860114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.437 [2024-12-06 07:13:43.860129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:11.437 [2024-12-06 07:13:43.860140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.437 [2024-12-06 07:13:43.860155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.860310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.437 [2024-12-06 07:13:43.860327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:11.437 [2024-12-06 07:13:43.860337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.437 [2024-12-06 07:13:43.860346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.860388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.437 [2024-12-06 07:13:43.860403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:11.437 [2024-12-06 07:13:43.860413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.437 [2024-12-06 07:13:43.860422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.860481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.437 [2024-12-06 07:13:43.860537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:11.437 [2024-12-06 07:13:43.860563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.437 [2024-12-06 07:13:43.860573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.860619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:11.437 [2024-12-06 07:13:43.860634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:11.437 [2024-12-06 07:13:43.860644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:11.437 [2024-12-06 07:13:43.860659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:11.437 [2024-12-06 07:13:43.860812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.092 ms, result 0 00:47:12.375 00:47:12.375 00:47:12.375 07:13:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:47:14.294 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:47:14.294 07:13:46 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80904 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80904 ']' 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80904 00:47:14.294 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80904) - No such process 00:47:14.294 Process with pid 80904 is not found 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80904 is not found' 00:47:14.294 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:47:14.554 Remove shared memory files 00:47:14.554 07:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:47:14.554 07:13:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:47:14.554 07:13:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:47:14.554 07:13:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:47:14.554 07:13:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:47:14.554 07:13:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:47:14.554 07:13:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:47:14.554 ************************************ 00:47:14.554 END TEST ftl_dirty_shutdown 00:47:14.554 ************************************ 00:47:14.554 00:47:14.554 real 3m56.053s 00:47:14.554 user 4m32.582s 00:47:14.554 sys 0m34.085s 00:47:14.554 07:13:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:14.554 07:13:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:47:14.554 07:13:47 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:47:14.554 07:13:47 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:47:14.554 07:13:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:14.554 07:13:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:47:14.554 ************************************ 00:47:14.554 START TEST ftl_upgrade_shutdown 00:47:14.554 ************************************ 00:47:14.554 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:47:14.554 * Looking for test storage... 
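The teardown just traced for ftl_dirty_shutdown follows a pattern common to these tests: drop the error trap, remove the scratch files, then call killprocess on a target that may already have exited, in which case kill -0 fails and the helper only logs that the pid is gone. A minimal sketch of that pattern (the helper below is a simplification of autotest_common.sh, not the verbatim function; svcpid is an illustrative variable name):

#!/usr/bin/env bash
# Simplified teardown sketch based on the xtrace above.
testdir=/home/vagrant/spdk_repo/spdk/test/ftl

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1              # nothing to kill
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" && wait "$pid"         # still alive: terminate and reap (pid must be a child of this shell)
    else
        echo "Process with pid $pid is not found"
    fi
}

trap - SIGINT SIGTERM EXIT                 # drop the error trap before cleanup
rm -f "$testdir"/config/ftl.json
rm -f "$testdir"/testfile "$testdir"/testfile2 "$testdir"/testfile.md5 "$testdir"/testfile2.md5
killprocess "$svcpid"                      # in this run the target had already exited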
00:47:14.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:14.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:14.815 --rc genhtml_branch_coverage=1 00:47:14.815 --rc genhtml_function_coverage=1 00:47:14.815 --rc genhtml_legend=1 00:47:14.815 --rc geninfo_all_blocks=1 00:47:14.815 --rc geninfo_unexecuted_blocks=1 00:47:14.815 00:47:14.815 ' 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:14.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:14.815 --rc genhtml_branch_coverage=1 00:47:14.815 --rc genhtml_function_coverage=1 00:47:14.815 --rc genhtml_legend=1 00:47:14.815 --rc geninfo_all_blocks=1 00:47:14.815 --rc geninfo_unexecuted_blocks=1 00:47:14.815 00:47:14.815 ' 00:47:14.815 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:14.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:14.815 --rc genhtml_branch_coverage=1 00:47:14.815 --rc genhtml_function_coverage=1 00:47:14.815 --rc genhtml_legend=1 00:47:14.815 --rc geninfo_all_blocks=1 00:47:14.815 --rc geninfo_unexecuted_blocks=1 00:47:14.815 00:47:14.815 ' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:14.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:14.816 --rc genhtml_branch_coverage=1 00:47:14.816 --rc genhtml_function_coverage=1 00:47:14.816 --rc genhtml_legend=1 00:47:14.816 --rc geninfo_all_blocks=1 00:47:14.816 --rc geninfo_unexecuted_blocks=1 00:47:14.816 00:47:14.816 ' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:47:14.816 07:13:47 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83338 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83338 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83338 ']' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:14.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:14.816 07:13:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:47:15.076 [2024-12-06 07:13:47.408407] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
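The version probe near the top of this test (the lt 1.15 2 trace from scripts/common.sh) decides which lcov flags to export by splitting both version strings on '.', '-' and ':' and comparing them field by field. A condensed, self-contained sketch of that comparison, assuming all fields are numeric, is:

#!/usr/bin/env bash
# Condensed sketch of the cmp_versions logic traced from scripts/common.sh.
# lt A B succeeds when version A is strictly lower than version B.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
        (( d1 < d2 )) && return 0                 # first differing field decides
        (( d1 > d2 )) && return 1
    done
    return 1                                      # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: keep branch/function coverage flags"   # matches the trace above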
00:47:15.076 [2024-12-06 07:13:47.409127] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83338 ] 00:47:15.076 [2024-12-06 07:13:47.593404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:15.336 [2024-12-06 07:13:47.718257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:47:15.905 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:47:16.165 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:47:16.165 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:47:16.165 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:47:16.165 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:47:16.165 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:47:16.165 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:47:16.165 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:47:16.165 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:47:16.424 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:47:16.424 { 00:47:16.424 "name": "basen1", 00:47:16.424 "aliases": [ 00:47:16.424 "73b8de0b-0c33-4d01-b4d9-64d3f6cb48fd" 00:47:16.424 ], 00:47:16.424 "product_name": "NVMe disk", 00:47:16.424 "block_size": 4096, 00:47:16.424 "num_blocks": 1310720, 00:47:16.424 "uuid": "73b8de0b-0c33-4d01-b4d9-64d3f6cb48fd", 00:47:16.425 "numa_id": -1, 00:47:16.425 "assigned_rate_limits": { 00:47:16.425 "rw_ios_per_sec": 0, 00:47:16.425 "rw_mbytes_per_sec": 0, 00:47:16.425 "r_mbytes_per_sec": 0, 00:47:16.425 "w_mbytes_per_sec": 0 00:47:16.425 }, 00:47:16.425 "claimed": true, 00:47:16.425 "claim_type": "read_many_write_one", 00:47:16.425 "zoned": false, 00:47:16.425 "supported_io_types": { 00:47:16.425 "read": true, 00:47:16.425 "write": true, 00:47:16.425 "unmap": true, 00:47:16.425 "flush": true, 00:47:16.425 "reset": true, 00:47:16.425 "nvme_admin": true, 00:47:16.425 "nvme_io": true, 00:47:16.425 "nvme_io_md": false, 00:47:16.425 "write_zeroes": true, 00:47:16.425 "zcopy": false, 00:47:16.425 "get_zone_info": false, 00:47:16.425 "zone_management": false, 00:47:16.425 "zone_append": false, 00:47:16.425 "compare": true, 00:47:16.425 "compare_and_write": false, 00:47:16.425 "abort": true, 00:47:16.425 "seek_hole": false, 00:47:16.425 "seek_data": false, 00:47:16.425 "copy": true, 00:47:16.425 "nvme_iov_md": false 00:47:16.425 }, 00:47:16.425 "driver_specific": { 00:47:16.425 "nvme": [ 00:47:16.425 { 00:47:16.425 "pci_address": "0000:00:11.0", 00:47:16.425 "trid": { 00:47:16.425 "trtype": "PCIe", 00:47:16.425 "traddr": "0000:00:11.0" 00:47:16.425 }, 00:47:16.425 "ctrlr_data": { 00:47:16.425 "cntlid": 0, 00:47:16.425 "vendor_id": "0x1b36", 00:47:16.425 "model_number": "QEMU NVMe Ctrl", 00:47:16.425 "serial_number": "12341", 00:47:16.425 "firmware_revision": "8.0.0", 00:47:16.425 "subnqn": "nqn.2019-08.org.qemu:12341", 00:47:16.425 "oacs": { 00:47:16.425 "security": 0, 00:47:16.425 "format": 1, 00:47:16.425 "firmware": 0, 00:47:16.425 "ns_manage": 1 00:47:16.425 }, 00:47:16.425 "multi_ctrlr": false, 00:47:16.425 "ana_reporting": false 00:47:16.425 }, 00:47:16.425 "vs": { 00:47:16.425 "nvme_version": "1.4" 00:47:16.425 }, 00:47:16.425 "ns_data": { 00:47:16.425 "id": 1, 00:47:16.425 "can_share": false 00:47:16.425 } 00:47:16.425 } 00:47:16.425 ], 00:47:16.425 "mp_policy": "active_passive" 00:47:16.425 } 00:47:16.425 } 00:47:16.425 ]' 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:47:16.425 07:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:47:16.684 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f9774b37-3afb-4768-97a2-8597fc972b2e 00:47:16.684 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:47:16.684 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9774b37-3afb-4768-97a2-8597fc972b2e 00:47:17.252 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:47:17.252 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=85455f87-4036-423d-a319-2cb2decf5749 00:47:17.252 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 85455f87-4036-423d-a319-2cb2decf5749 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=82dd6b84-d5ae-491c-b364-249f1bea685a 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 82dd6b84-d5ae-491c-b364-249f1bea685a ]] 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 82dd6b84-d5ae-491c-b364-249f1bea685a 5120 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=82dd6b84-d5ae-491c-b364-249f1bea685a 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 82dd6b84-d5ae-491c-b364-249f1bea685a 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=82dd6b84-d5ae-491c-b364-249f1bea685a 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:47:17.511 07:13:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:47:17.512 07:13:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:47:17.512 07:13:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 82dd6b84-d5ae-491c-b364-249f1bea685a 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:47:17.771 { 00:47:17.771 "name": "82dd6b84-d5ae-491c-b364-249f1bea685a", 00:47:17.771 "aliases": [ 00:47:17.771 "lvs/basen1p0" 00:47:17.771 ], 00:47:17.771 "product_name": "Logical Volume", 00:47:17.771 "block_size": 4096, 00:47:17.771 "num_blocks": 5242880, 00:47:17.771 "uuid": "82dd6b84-d5ae-491c-b364-249f1bea685a", 00:47:17.771 "assigned_rate_limits": { 00:47:17.771 "rw_ios_per_sec": 0, 00:47:17.771 "rw_mbytes_per_sec": 0, 00:47:17.771 "r_mbytes_per_sec": 0, 00:47:17.771 "w_mbytes_per_sec": 0 00:47:17.771 }, 00:47:17.771 "claimed": false, 00:47:17.771 "zoned": false, 00:47:17.771 "supported_io_types": { 00:47:17.771 "read": true, 00:47:17.771 "write": true, 00:47:17.771 "unmap": true, 00:47:17.771 "flush": false, 00:47:17.771 "reset": true, 00:47:17.771 "nvme_admin": false, 00:47:17.771 "nvme_io": false, 00:47:17.771 "nvme_io_md": false, 00:47:17.771 "write_zeroes": 
true, 00:47:17.771 "zcopy": false, 00:47:17.771 "get_zone_info": false, 00:47:17.771 "zone_management": false, 00:47:17.771 "zone_append": false, 00:47:17.771 "compare": false, 00:47:17.771 "compare_and_write": false, 00:47:17.771 "abort": false, 00:47:17.771 "seek_hole": true, 00:47:17.771 "seek_data": true, 00:47:17.771 "copy": false, 00:47:17.771 "nvme_iov_md": false 00:47:17.771 }, 00:47:17.771 "driver_specific": { 00:47:17.771 "lvol": { 00:47:17.771 "lvol_store_uuid": "85455f87-4036-423d-a319-2cb2decf5749", 00:47:17.771 "base_bdev": "basen1", 00:47:17.771 "thin_provision": true, 00:47:17.771 "num_allocated_clusters": 0, 00:47:17.771 "snapshot": false, 00:47:17.771 "clone": false, 00:47:17.771 "esnap_clone": false 00:47:17.771 } 00:47:17.771 } 00:47:17.771 } 00:47:17.771 ]' 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:47:17.771 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:47:18.337 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:47:18.337 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:47:18.337 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:47:18.337 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:47:18.337 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:47:18.337 07:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 82dd6b84-d5ae-491c-b364-249f1bea685a -c cachen1p0 --l2p_dram_limit 2 00:47:18.597 [2024-12-06 07:13:51.080813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.080877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:47:18.597 [2024-12-06 07:13:51.080898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:18.597 [2024-12-06 07:13:51.080909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.080975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.080991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:47:18.597 [2024-12-06 07:13:51.081004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:47:18.597 [2024-12-06 07:13:51.081014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.081041] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:47:18.597 [2024-12-06 
07:13:51.081809] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:47:18.597 [2024-12-06 07:13:51.081856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.081883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:47:18.597 [2024-12-06 07:13:51.081897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.817 ms 00:47:18.597 [2024-12-06 07:13:51.081908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.082041] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID dd5df755-28b3-497e-b4d5-916ca521a385 00:47:18.597 [2024-12-06 07:13:51.083115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.083168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:47:18.597 [2024-12-06 07:13:51.083184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:47:18.597 [2024-12-06 07:13:51.083195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.087182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.087243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:47:18.597 [2024-12-06 07:13:51.087258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.941 ms 00:47:18.597 [2024-12-06 07:13:51.087270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.087324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.087342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:47:18.597 [2024-12-06 07:13:51.087354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:47:18.597 [2024-12-06 07:13:51.087367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.087439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.087459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:47:18.597 [2024-12-06 07:13:51.087472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:47:18.597 [2024-12-06 07:13:51.087484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.087511] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:47:18.597 [2024-12-06 07:13:51.091321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.091355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:47:18.597 [2024-12-06 07:13:51.091390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.816 ms 00:47:18.597 [2024-12-06 07:13:51.091400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.091435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.091449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:47:18.597 [2024-12-06 07:13:51.091463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:18.597 [2024-12-06 07:13:51.091472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.091512] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:47:18.597 [2024-12-06 07:13:51.091646] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:47:18.597 [2024-12-06 07:13:51.091667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:47:18.597 [2024-12-06 07:13:51.091680] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:47:18.597 [2024-12-06 07:13:51.091694] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:47:18.597 [2024-12-06 07:13:51.091706] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:47:18.597 [2024-12-06 07:13:51.091718] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:47:18.597 [2024-12-06 07:13:51.091770] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:47:18.597 [2024-12-06 07:13:51.091788] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:47:18.597 [2024-12-06 07:13:51.091798] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:47:18.597 [2024-12-06 07:13:51.091810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.091821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:47:18.597 [2024-12-06 07:13:51.091833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 00:47:18.597 [2024-12-06 07:13:51.091859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.091945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.597 [2024-12-06 07:13:51.091970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:47:18.597 [2024-12-06 07:13:51.091984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:47:18.597 [2024-12-06 07:13:51.091995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.597 [2024-12-06 07:13:51.092116] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:47:18.597 [2024-12-06 07:13:51.092134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:47:18.597 [2024-12-06 07:13:51.092163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:47:18.597 [2024-12-06 07:13:51.092190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.597 [2024-12-06 07:13:51.092203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:47:18.597 [2024-12-06 07:13:51.092213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:47:18.597 [2024-12-06 07:13:51.092225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:47:18.597 [2024-12-06 07:13:51.092235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:47:18.597 [2024-12-06 07:13:51.092247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:47:18.597 [2024-12-06 07:13:51.092256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.597 [2024-12-06 07:13:51.092270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:47:18.597 [2024-12-06 07:13:51.092280] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:47:18.598 [2024-12-06 07:13:51.092291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:47:18.598 [2024-12-06 07:13:51.092314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:47:18.598 [2024-12-06 07:13:51.092324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:47:18.598 [2024-12-06 07:13:51.092347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:47:18.598 [2024-12-06 07:13:51.092358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:47:18.598 [2024-12-06 07:13:51.092381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:47:18.598 [2024-12-06 07:13:51.092391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:18.598 [2024-12-06 07:13:51.092402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:47:18.598 [2024-12-06 07:13:51.092412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:47:18.598 [2024-12-06 07:13:51.092424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:18.598 [2024-12-06 07:13:51.092433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:47:18.598 [2024-12-06 07:13:51.092445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:47:18.598 [2024-12-06 07:13:51.092454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:18.598 [2024-12-06 07:13:51.092466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:47:18.598 [2024-12-06 07:13:51.092475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:47:18.598 [2024-12-06 07:13:51.092517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:18.598 [2024-12-06 07:13:51.092545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:47:18.598 [2024-12-06 07:13:51.092560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:47:18.598 [2024-12-06 07:13:51.092571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:47:18.598 [2024-12-06 07:13:51.092594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:47:18.598 [2024-12-06 07:13:51.092608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:47:18.598 [2024-12-06 07:13:51.092630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:47:18.598 [2024-12-06 07:13:51.092663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:47:18.598 [2024-12-06 07:13:51.092675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092685] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:47:18.598 [2024-12-06 07:13:51.092698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:47:18.598 [2024-12-06 07:13:51.092709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:47:18.598 [2024-12-06 07:13:51.092735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:18.598 [2024-12-06 07:13:51.092750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:47:18.598 [2024-12-06 07:13:51.092765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:47:18.598 [2024-12-06 07:13:51.092776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:47:18.598 [2024-12-06 07:13:51.092788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:47:18.598 [2024-12-06 07:13:51.092798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:47:18.598 [2024-12-06 07:13:51.092812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:47:18.598 [2024-12-06 07:13:51.092839] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:47:18.598 [2024-12-06 07:13:51.092857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.092870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:47:18.598 [2024-12-06 07:13:51.092898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.092910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.092923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:47:18.598 [2024-12-06 07:13:51.092934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:47:18.598 [2024-12-06 07:13:51.092947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:47:18.598 [2024-12-06 07:13:51.092958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:47:18.598 [2024-12-06 07:13:51.092973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.092984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.092999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.093010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.093028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.093039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.093053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:47:18.598 [2024-12-06 07:13:51.093064] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:47:18.598 [2024-12-06 07:13:51.093078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.093090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:18.598 [2024-12-06 07:13:51.093104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:47:18.598 [2024-12-06 07:13:51.093115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:47:18.598 [2024-12-06 07:13:51.093128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:47:18.598 [2024-12-06 07:13:51.093140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.598 [2024-12-06 07:13:51.093153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:47:18.598 [2024-12-06 07:13:51.093165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.103 ms 00:47:18.598 [2024-12-06 07:13:51.093177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.598 [2024-12-06 07:13:51.093226] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
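For reference, the FTL bring-up that the trace above performs one xtrace line at a time can be replayed by hand against a running spdk_tgt. This is a condensed sketch of the traced rpc.py calls, not the test itself; the lvstore and lvol UUIDs are the ones this particular run generated, so substitute your own:

#!/usr/bin/env bash
# Condensed from the rpc.py calls traced above; UUIDs are run-specific.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
$rpc bdev_get_bdevs -b basen1 | jq '.[] .block_size, .[] .num_blocks'
# size check: 4096 B/block * 1310720 blocks / 1048576 = 5120 MiB, as the trace computes

# clear stale lvstores, then carve a 20480 MiB thin-provisioned lvol for FTL
$rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'
$rpc bdev_lvol_delete_lvstore -u f9774b37-3afb-4768-97a2-8597fc972b2e
lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)            # 85455f87-4036-423d-a319-2cb2decf5749 in this run
lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")  # 82dd6b84-d5ae-491c-b364-249f1bea685a in this run

# a 5120 MiB slice of the second NVMe device becomes the NV cache
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
$rpc bdev_split_create cachen1 -s 5120 1                            # creates cachen1p0

# glue base + cache into the FTL bdev, capping resident L2P DRAM
$rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2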
00:47:18.598 [2024-12-06 07:13:51.093247] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:47:21.126 [2024-12-06 07:13:53.279289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.279344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:47:21.126 [2024-12-06 07:13:53.279363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2186.076 ms 00:47:21.126 [2024-12-06 07:13:53.279376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.308602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.308665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:47:21.126 [2024-12-06 07:13:53.308686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.018 ms 00:47:21.126 [2024-12-06 07:13:53.308702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.308843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.308870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:47:21.126 [2024-12-06 07:13:53.308885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:47:21.126 [2024-12-06 07:13:53.308905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.344092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.344153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:47:21.126 [2024-12-06 07:13:53.344170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.128 ms 00:47:21.126 [2024-12-06 07:13:53.344183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.344230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.344252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:47:21.126 [2024-12-06 07:13:53.344263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:47:21.126 [2024-12-06 07:13:53.344275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.344678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.344735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:47:21.126 [2024-12-06 07:13:53.344762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:47:21.126 [2024-12-06 07:13:53.344792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.344871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.344889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:47:21.126 [2024-12-06 07:13:53.344903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:47:21.126 [2024-12-06 07:13:53.344917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.359569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.359611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:47:21.126 [2024-12-06 07:13:53.359627] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.630 ms 00:47:21.126 [2024-12-06 07:13:53.359639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.382602] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:47:21.126 [2024-12-06 07:13:53.383599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.383629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:47:21.126 [2024-12-06 07:13:53.383646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.815 ms 00:47:21.126 [2024-12-06 07:13:53.383657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.406037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.406223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:47:21.126 [2024-12-06 07:13:53.406359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.342 ms 00:47:21.126 [2024-12-06 07:13:53.406415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.406639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.406810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:47:21.126 [2024-12-06 07:13:53.406937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:47:21.126 [2024-12-06 07:13:53.406992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.432676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.432891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:47:21.126 [2024-12-06 07:13:53.433015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.510 ms 00:47:21.126 [2024-12-06 07:13:53.433072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.458467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.458646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:47:21.126 [2024-12-06 07:13:53.458677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.207 ms 00:47:21.126 [2024-12-06 07:13:53.458689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.459495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.459639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:47:21.126 [2024-12-06 07:13:53.459669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.728 ms 00:47:21.126 [2024-12-06 07:13:53.459684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.533732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.533965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:47:21.126 [2024-12-06 07:13:53.534002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.952 ms 00:47:21.126 [2024-12-06 07:13:53.534016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.560683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
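Every management step in this startup sequence is traced as an Action/name/duration/status quadruple. Assuming the same one-entry-per-line capture in a hypothetical ftl.log, the step names and timings can be paired up with a quick sketch (process substitution requires bash):

  # extract step names and durations from the trace_step lines and pair them
  paste <(sed -n 's/.*\] name: //p' ftl.log) \
        <(sed -n 's/.*\] duration: //p' ftl.log)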
00:47:21.126 [2024-12-06 07:13:53.560752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:47:21.126 [2024-12-06 07:13:53.560790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.563 ms 00:47:21.126 [2024-12-06 07:13:53.560802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.586121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.586158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:47:21.126 [2024-12-06 07:13:53.586194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.235 ms 00:47:21.126 [2024-12-06 07:13:53.586205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.612042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.612080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:47:21.126 [2024-12-06 07:13:53.612099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.791 ms 00:47:21.126 [2024-12-06 07:13:53.612110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.612159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.612175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:47:21.126 [2024-12-06 07:13:53.612190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:47:21.126 [2024-12-06 07:13:53.612203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.612288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:21.126 [2024-12-06 07:13:53.612307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:47:21.126 [2024-12-06 07:13:53.612319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:47:21.126 [2024-12-06 07:13:53.612329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:21.126 [2024-12-06 07:13:53.613529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2532.160 ms, result 0 00:47:21.126 { 00:47:21.126 "name": "ftl", 00:47:21.126 "uuid": "dd5df755-28b3-497e-b4d5-916ca521a385" 00:47:21.126 } 00:47:21.126 07:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:47:21.384 [2024-12-06 07:13:53.904629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:21.384 07:13:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:47:21.642 07:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:47:21.902 [2024-12-06 07:13:54.381124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:47:21.902 07:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:47:22.161 [2024-12-06 07:13:54.665982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:22.161 07:13:54 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:47:22.731 Fill FTL, iteration 1 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:47:22.731 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83449 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83449 /var/tmp/spdk.tgt.sock 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83449 ']' 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:47:22.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:22.732 07:13:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:47:22.732 [2024-12-06 07:13:55.164730] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
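tcp_initiator_setup (common.sh@151 onward above) launches a second SPDK application to play the NVMe/TCP initiator role, pinned to core 1 and isolated from the target by its own RPC socket. A condensed sketch of the launch; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  # simplified wait: poll until the initiator's RPC socket exists
  while [ ! -S /var/tmp/spdk.tgt.sock ]; do sleep 0.1; done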
00:47:22.732 [2024-12-06 07:13:55.165223] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83449 ] 00:47:22.989 [2024-12-06 07:13:55.329544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:22.989 [2024-12-06 07:13:55.416639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:23.557 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:23.557 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:47:23.557 07:13:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:47:23.816 ftln1 00:47:23.816 07:13:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:47:23.816 07:13:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83449 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83449 ']' 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83449 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83449 00:47:24.075 killing process with pid 83449 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83449' 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83449 00:47:24.075 07:13:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83449 00:47:25.981 07:13:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:47:25.981 07:13:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:47:25.981 [2024-12-06 07:13:58.352580] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
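Once the initiator is up, common.sh@167 attaches the target's FTL namespace over TCP as controller 'ftl' (the namespace surfaces as bdev ftln1), and common.sh@171-173 snapshot the initiator's bdev configuration into a JSON file that standalone spdk_dd processes can replay. Reconstructed from the xtrace above; the output path is inferred from the --json argument spdk_dd is given below:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

The helper spdk_tgt (pid 83449 here) is then killed; each subsequent spdk_dd run re-attaches over TCP using only ini.json.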
00:47:25.981 [2024-12-06 07:13:58.352777] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83491 ] 00:47:25.981 [2024-12-06 07:13:58.526706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:26.241 [2024-12-06 07:13:58.613650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:27.647  [2024-12-06T07:14:01.178Z] Copying: 217/1024 [MB] (217 MBps) [2024-12-06T07:14:02.116Z] Copying: 430/1024 [MB] (213 MBps) [2024-12-06T07:14:03.054Z] Copying: 647/1024 [MB] (217 MBps) [2024-12-06T07:14:03.993Z] Copying: 862/1024 [MB] (215 MBps) [2024-12-06T07:14:04.932Z] Copying: 1024/1024 [MB] (average 215 MBps) 00:47:32.341 00:47:32.341 Calculate MD5 checksum, iteration 1 00:47:32.341 07:14:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:47:32.341 07:14:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:47:32.341 07:14:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:47:32.341 07:14:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:32.341 07:14:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:32.341 07:14:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:32.341 07:14:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:32.341 07:14:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:47:32.341 [2024-12-06 07:14:04.657414] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
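The pattern that now repeats: each iteration writes 1 GiB of /dev/urandom into ftln1 at the next 1 GiB-aligned offset, reads the same range back into a scratch file, and records its MD5 so the data can be re-verified after the shutdown/upgrade cycle. A condensed paraphrase of the loop, using the upgrade_shutdown.sh variables traced above (bs=1048576, count=1024, qd=2, iterations=2; $tmp_file stands for test/ftl/file):

  for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$((i * count))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$tmp_file" --bs=$bs --count=$count --qd=$qd --skip=$((i * count))
    sums[i]=$(md5sum "$tmp_file" | cut -f1 -d' ')
  done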
00:47:32.341 [2024-12-06 07:14:04.657807] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83561 ] 00:47:32.341 [2024-12-06 07:14:04.810833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:32.341 [2024-12-06 07:14:04.894131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:33.717  [2024-12-06T07:14:07.683Z] Copying: 461/1024 [MB] (461 MBps) [2024-12-06T07:14:07.683Z] Copying: 923/1024 [MB] (462 MBps) [2024-12-06T07:14:08.251Z] Copying: 1024/1024 [MB] (average 460 MBps) 00:47:35.660 00:47:35.660 07:14:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:47:35.660 07:14:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:47:37.567 Fill FTL, iteration 2 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2135b4109fb4fe16ff8666c8be8ef0ae 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:37.567 07:14:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:47:37.567 [2024-12-06 07:14:10.116170] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
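tcp_dd itself (common.sh@198-199 in the trace) is a thin wrapper: it calls tcp_initiator_setup, which short-circuits at common.sh@153-154 once ini.json exists, then runs spdk_dd pinned to core 1 against that config, so every copy pass above is a fresh spdk_dd process attaching to the target over TCP. Roughly:

  tcp_dd() {
    tcp_initiator_setup  # returns immediately once ini.json is in place
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
  }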
00:47:37.567 [2024-12-06 07:14:10.117307] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83619 ] 00:47:37.827 [2024-12-06 07:14:10.292862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:37.827 [2024-12-06 07:14:10.415631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:39.238  [2024-12-06T07:14:13.215Z] Copying: 215/1024 [MB] (215 MBps) [2024-12-06T07:14:13.783Z] Copying: 430/1024 [MB] (215 MBps) [2024-12-06T07:14:15.161Z] Copying: 645/1024 [MB] (215 MBps) [2024-12-06T07:14:15.727Z] Copying: 853/1024 [MB] (208 MBps) [2024-12-06T07:14:16.664Z] Copying: 1024/1024 [MB] (average 211 MBps) 00:47:44.073 00:47:44.073 07:14:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:47:44.073 07:14:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:47:44.073 Calculate MD5 checksum, iteration 2 00:47:44.073 07:14:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:44.073 07:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:44.073 07:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:44.073 07:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:44.073 07:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:44.073 07:14:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:44.073 [2024-12-06 07:14:16.587243] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
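As a sanity check on the copy traces: 1024 blocks × 1 MiB = 1 GiB per pass, so the fills at ~215 MBps take roughly 4.8 s each while the read-backs at ~460 MBps take about 2.2 s. Before shutting the target down, the test also confirms the writes actually landed in the NV cache by counting non-empty cache chunks from the property dump that follows; the filter (upgrade_shutdown.sh@63 below) is:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

Here it reports 3 used chunks, which upgrade_shutdown.sh@64 compares against zero.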
00:47:44.073 [2024-12-06 07:14:16.588161] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83682 ] 00:47:44.332 [2024-12-06 07:14:16.765876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:44.332 [2024-12-06 07:14:16.852313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:46.240  [2024-12-06T07:14:19.399Z] Copying: 465/1024 [MB] (465 MBps) [2024-12-06T07:14:19.658Z] Copying: 922/1024 [MB] (457 MBps) [2024-12-06T07:14:20.594Z] Copying: 1024/1024 [MB] (average 460 MBps) 00:47:48.004 00:47:48.263 07:14:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:47:48.263 07:14:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:50.167 07:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:47:50.167 07:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b0373dd3e9c3c539958d881e438336aa 00:47:50.167 07:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:47:50.167 07:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:47:50.167 07:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:47:50.167 [2024-12-06 07:14:22.669346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:50.167 [2024-12-06 07:14:22.669542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:47:50.167 [2024-12-06 07:14:22.669573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:47:50.167 [2024-12-06 07:14:22.669585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:50.167 [2024-12-06 07:14:22.669629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:50.167 [2024-12-06 07:14:22.669654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:47:50.167 [2024-12-06 07:14:22.669666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:50.167 [2024-12-06 07:14:22.669676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:50.167 [2024-12-06 07:14:22.669703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:50.167 [2024-12-06 07:14:22.669780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:47:50.167 [2024-12-06 07:14:22.669794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:47:50.167 [2024-12-06 07:14:22.669805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:50.167 [2024-12-06 07:14:22.669901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.520 ms, result 0 00:47:50.167 true 00:47:50.167 07:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:47:50.425 { 00:47:50.425 "name": "ftl", 00:47:50.425 "properties": [ 00:47:50.425 { 00:47:50.425 "name": "superblock_version", 00:47:50.425 "value": 5, 00:47:50.425 "read-only": true 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "name": "base_device", 00:47:50.425 "bands": [ 00:47:50.425 { 00:47:50.425 "id": 
0, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 1, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 2, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 3, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 4, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 5, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 6, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 7, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 8, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 9, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 10, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 11, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 12, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 13, 00:47:50.425 "state": "FREE", 00:47:50.425 "validity": 0.0 00:47:50.425 }, 00:47:50.425 { 00:47:50.425 "id": 14, 00:47:50.426 "state": "FREE", 00:47:50.426 "validity": 0.0 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "id": 15, 00:47:50.426 "state": "FREE", 00:47:50.426 "validity": 0.0 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "id": 16, 00:47:50.426 "state": "FREE", 00:47:50.426 "validity": 0.0 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "id": 17, 00:47:50.426 "state": "FREE", 00:47:50.426 "validity": 0.0 00:47:50.426 } 00:47:50.426 ], 00:47:50.426 "read-only": true 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "name": "cache_device", 00:47:50.426 "type": "bdev", 00:47:50.426 "chunks": [ 00:47:50.426 { 00:47:50.426 "id": 0, 00:47:50.426 "state": "INACTIVE", 00:47:50.426 "utilization": 0.0 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "id": 1, 00:47:50.426 "state": "CLOSED", 00:47:50.426 "utilization": 1.0 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "id": 2, 00:47:50.426 "state": "CLOSED", 00:47:50.426 "utilization": 1.0 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "id": 3, 00:47:50.426 "state": "OPEN", 00:47:50.426 "utilization": 0.001953125 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "id": 4, 00:47:50.426 "state": "OPEN", 00:47:50.426 "utilization": 0.0 00:47:50.426 } 00:47:50.426 ], 00:47:50.426 "read-only": true 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "name": "verbose_mode", 00:47:50.426 "value": true, 00:47:50.426 "unit": "", 00:47:50.426 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:47:50.426 }, 00:47:50.426 { 00:47:50.426 "name": "prep_upgrade_on_shutdown", 00:47:50.426 "value": false, 00:47:50.426 "unit": "", 00:47:50.426 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:47:50.426 } 00:47:50.426 ] 00:47:50.426 } 00:47:50.426 07:14:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:47:50.684 [2024-12-06 07:14:23.173886] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:50.684 [2024-12-06 07:14:23.174159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:47:50.684 [2024-12-06 07:14:23.174280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:47:50.684 [2024-12-06 07:14:23.174403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:50.684 [2024-12-06 07:14:23.174561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:50.684 [2024-12-06 07:14:23.174680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:47:50.684 [2024-12-06 07:14:23.174857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:47:50.684 [2024-12-06 07:14:23.174976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:50.684 [2024-12-06 07:14:23.175159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:50.684 [2024-12-06 07:14:23.175280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:47:50.684 [2024-12-06 07:14:23.175392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:47:50.684 [2024-12-06 07:14:23.175439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:50.684 [2024-12-06 07:14:23.175614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.710 ms, result 0 00:47:50.684 true 00:47:50.684 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:47:50.684 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:47:50.684 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:47:50.943 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:47:50.943 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:47:50.943 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:47:51.201 [2024-12-06 07:14:23.622342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:51.201 [2024-12-06 07:14:23.622390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:47:51.201 [2024-12-06 07:14:23.622407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:47:51.201 [2024-12-06 07:14:23.622417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:51.201 [2024-12-06 07:14:23.622447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:51.201 [2024-12-06 07:14:23.622460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:47:51.201 [2024-12-06 07:14:23.622470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:47:51.201 [2024-12-06 07:14:23.622479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:51.201 [2024-12-06 07:14:23.622503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:51.201 [2024-12-06 07:14:23.622515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:47:51.201 [2024-12-06 07:14:23.622525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:47:51.201 [2024-12-06 
07:14:23.622534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:51.201 [2024-12-06 07:14:23.622614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.244 ms, result 0 00:47:51.201 true 00:47:51.201 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:47:51.460 { 00:47:51.460 "name": "ftl", 00:47:51.460 "properties": [ 00:47:51.460 { 00:47:51.460 "name": "superblock_version", 00:47:51.460 "value": 5, 00:47:51.460 "read-only": true 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "name": "base_device", 00:47:51.460 "bands": [ 00:47:51.460 { 00:47:51.460 "id": 0, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 1, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 2, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 3, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 4, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 5, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 6, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 7, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 8, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 9, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 10, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 11, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 12, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 13, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.460 { 00:47:51.460 "id": 14, 00:47:51.460 "state": "FREE", 00:47:51.460 "validity": 0.0 00:47:51.460 }, 00:47:51.461 { 00:47:51.461 "id": 15, 00:47:51.461 "state": "FREE", 00:47:51.461 "validity": 0.0 00:47:51.461 }, 00:47:51.461 { 00:47:51.461 "id": 16, 00:47:51.461 "state": "FREE", 00:47:51.461 "validity": 0.0 00:47:51.461 }, 00:47:51.461 { 00:47:51.461 "id": 17, 00:47:51.461 "state": "FREE", 00:47:51.461 "validity": 0.0 00:47:51.461 } 00:47:51.461 ], 00:47:51.461 "read-only": true 00:47:51.461 }, 00:47:51.461 { 00:47:51.461 "name": "cache_device", 00:47:51.461 "type": "bdev", 00:47:51.461 "chunks": [ 00:47:51.461 { 00:47:51.461 "id": 0, 00:47:51.461 "state": "INACTIVE", 00:47:51.461 "utilization": 0.0 00:47:51.461 }, 00:47:51.461 { 00:47:51.461 "id": 1, 00:47:51.461 "state": "CLOSED", 00:47:51.461 "utilization": 1.0 00:47:51.461 }, 00:47:51.461 { 00:47:51.461 "id": 2, 00:47:51.461 "state": "CLOSED", 00:47:51.461 "utilization": 1.0 00:47:51.461 }, 00:47:51.461 { 00:47:51.461 "id": 3, 00:47:51.461 "state": "OPEN", 00:47:51.461 "utilization": 0.001953125 00:47:51.461 }, 00:47:51.461 { 00:47:51.461 "id": 4, 00:47:51.461 "state": "OPEN", 00:47:51.461 "utilization": 0.0 00:47:51.461 } 00:47:51.461 ], 00:47:51.461 "read-only": true 00:47:51.461 
}, 00:47:51.461 { 00:47:51.461 "name": "verbose_mode", 00:47:51.461 "value": true, 00:47:51.461 "unit": "", 00:47:51.461 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:47:51.461 }, 00:47:51.461 { 00:47:51.461 "name": "prep_upgrade_on_shutdown", 00:47:51.461 "value": true, 00:47:51.461 "unit": "", 00:47:51.461 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:47:51.461 } 00:47:51.461 ] 00:47:51.461 } 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83338 ]] 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83338 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83338 ']' 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83338 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83338 00:47:51.461 killing process with pid 83338 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83338' 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83338 00:47:51.461 07:14:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83338 00:47:52.399 [2024-12-06 07:14:24.629207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:47:52.399 [2024-12-06 07:14:24.645187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:52.399 [2024-12-06 07:14:24.645231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:47:52.399 [2024-12-06 07:14:24.645265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:47:52.399 [2024-12-06 07:14:24.645276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:52.399 [2024-12-06 07:14:24.645303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:47:52.399 [2024-12-06 07:14:24.648090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:52.399 [2024-12-06 07:14:24.648264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:47:52.399 [2024-12-06 07:14:24.648312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.767 ms 00:47:52.399 [2024-12-06 07:14:24.648332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.573157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.573228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:48:00.517 [2024-12-06 07:14:32.573248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7924.845 ms 00:48:00.517 [2024-12-06 07:14:32.573263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 
07:14:32.574373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.574399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:48:00.517 [2024-12-06 07:14:32.574412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.089 ms 00:48:00.517 [2024-12-06 07:14:32.574424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.575524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.575556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:48:00.517 [2024-12-06 07:14:32.575570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.061 ms 00:48:00.517 [2024-12-06 07:14:32.575587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.586159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.586195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:48:00.517 [2024-12-06 07:14:32.586209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.517 ms 00:48:00.517 [2024-12-06 07:14:32.586218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.593033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.593072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:48:00.517 [2024-12-06 07:14:32.593086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.779 ms 00:48:00.517 [2024-12-06 07:14:32.593096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.593179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.593197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:48:00.517 [2024-12-06 07:14:32.593214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:48:00.517 [2024-12-06 07:14:32.593224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.603256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.603291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:48:00.517 [2024-12-06 07:14:32.603304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.014 ms 00:48:00.517 [2024-12-06 07:14:32.603313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.613551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.613738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:48:00.517 [2024-12-06 07:14:32.613765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.203 ms 00:48:00.517 [2024-12-06 07:14:32.613776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.623785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.623955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:48:00.517 [2024-12-06 07:14:32.623979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.965 ms 00:48:00.517 [2024-12-06 07:14:32.623991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:48:00.517 [2024-12-06 07:14:32.633994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.517 [2024-12-06 07:14:32.634029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:48:00.517 [2024-12-06 07:14:32.634041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.927 ms 00:48:00.517 [2024-12-06 07:14:32.634050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.517 [2024-12-06 07:14:32.634088] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:48:00.517 [2024-12-06 07:14:32.634119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:48:00.517 [2024-12-06 07:14:32.634131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:48:00.517 [2024-12-06 07:14:32.634141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:48:00.517 [2024-12-06 07:14:32.634151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:48:00.517 [2024-12-06 07:14:32.634160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:48:00.518 [2024-12-06 07:14:32.634288] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:48:00.518 [2024-12-06 07:14:32.634297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dd5df755-28b3-497e-b4d5-916ca521a385 00:48:00.518 [2024-12-06 07:14:32.634307] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:48:00.518 [2024-12-06 07:14:32.634315] 
ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:48:00.518 [2024-12-06 07:14:32.634324] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:48:00.518 [2024-12-06 07:14:32.634333] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:48:00.518 [2024-12-06 07:14:32.634341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:48:00.518 [2024-12-06 07:14:32.634356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:48:00.518 [2024-12-06 07:14:32.634365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:48:00.518 [2024-12-06 07:14:32.634373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:48:00.518 [2024-12-06 07:14:32.634381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:48:00.518 [2024-12-06 07:14:32.634391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.518 [2024-12-06 07:14:32.634403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:48:00.518 [2024-12-06 07:14:32.634413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:48:00.518 [2024-12-06 07:14:32.634422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.647550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.518 [2024-12-06 07:14:32.647585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:48:00.518 [2024-12-06 07:14:32.647599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.109 ms 00:48:00.518 [2024-12-06 07:14:32.647616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.648063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:00.518 [2024-12-06 07:14:32.648088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:48:00.518 [2024-12-06 07:14:32.648100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.424 ms 00:48:00.518 [2024-12-06 07:14:32.648111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.690574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.690617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:48:00.518 [2024-12-06 07:14:32.690637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.690653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.690688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.690701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:48:00.518 [2024-12-06 07:14:32.690751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.690762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.690861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.690878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:48:00.518 [2024-12-06 07:14:32.690889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.690906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 
[2024-12-06 07:14:32.690927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.690940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:48:00.518 [2024-12-06 07:14:32.690950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.690976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.776386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.776440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:48:00.518 [2024-12-06 07:14:32.776472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.776490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.843678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.843775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:48:00.518 [2024-12-06 07:14:32.843808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.843818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.843943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.843961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:48:00.518 [2024-12-06 07:14:32.843973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.843984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.844047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.844063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:48:00.518 [2024-12-06 07:14:32.844074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.844100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.844257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.844276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:48:00.518 [2024-12-06 07:14:32.844287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.844298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.844343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.844365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:48:00.518 [2024-12-06 07:14:32.844377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.844387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.844428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.844443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:48:00.518 [2024-12-06 07:14:32.844453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.844464] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.844544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:00.518 [2024-12-06 07:14:32.844562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:48:00.518 [2024-12-06 07:14:32.844575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:00.518 [2024-12-06 07:14:32.844585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:00.518 [2024-12-06 07:14:32.844748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8199.570 ms, result 0 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83871 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83871 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83871 ']' 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:03.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:03.057 07:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:03.315 [2024-12-06 07:14:35.675213] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
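With prep_upgrade_on_shutdown armed, the 'FTL shutdown' above runs the full persistence path: stopping the core poller accounts for 7.9 s of the 8.2 s total, after which the L2P, valid map, P2L, band and trim metadata and the superblock are persisted before the clean state is set. The target is then restarted from the configuration saved earlier by save_config (common.sh@126), so the same ftl bdev is rebuilt from the just-written superblock; as the startup trace below shows, cachen1p0 is reattached as the write buffer cache. The relaunch, condensed from common.sh@85-91:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid  # blocks until /var/tmp/spdk.sock accepts RPCs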
00:48:03.315 [2024-12-06 07:14:35.675370] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83871 ] 00:48:03.315 [2024-12-06 07:14:35.842443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:03.573 [2024-12-06 07:14:35.926752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:04.141 [2024-12-06 07:14:36.632936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:48:04.141 [2024-12-06 07:14:36.633012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:48:04.403 [2024-12-06 07:14:36.778626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.778847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:48:04.403 [2024-12-06 07:14:36.778876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:48:04.403 [2024-12-06 07:14:36.778889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.778969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.778987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:48:04.403 [2024-12-06 07:14:36.778998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:48:04.403 [2024-12-06 07:14:36.779008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.779048] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:48:04.403 [2024-12-06 07:14:36.779873] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:48:04.403 [2024-12-06 07:14:36.779898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.779908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:48:04.403 [2024-12-06 07:14:36.779918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.865 ms 00:48:04.403 [2024-12-06 07:14:36.779927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.781129] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:48:04.403 [2024-12-06 07:14:36.794095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.794272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:48:04.403 [2024-12-06 07:14:36.794306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.968 ms 00:48:04.403 [2024-12-06 07:14:36.794317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.794386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.794404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:48:04.403 [2024-12-06 07:14:36.794415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:48:04.403 [2024-12-06 07:14:36.794424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.798754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 
07:14:36.798788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:48:04.403 [2024-12-06 07:14:36.798801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.242 ms 00:48:04.403 [2024-12-06 07:14:36.798809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.798874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.798890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:48:04.403 [2024-12-06 07:14:36.798900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:48:04.403 [2024-12-06 07:14:36.798909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.798970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.798991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:48:04.403 [2024-12-06 07:14:36.799001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:48:04.403 [2024-12-06 07:14:36.799009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.799040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:48:04.403 [2024-12-06 07:14:36.802609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.802641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:48:04.403 [2024-12-06 07:14:36.802663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.577 ms 00:48:04.403 [2024-12-06 07:14:36.802677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.802738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.802754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:48:04.403 [2024-12-06 07:14:36.802764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:48:04.403 [2024-12-06 07:14:36.802773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.802802] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:48:04.403 [2024-12-06 07:14:36.802832] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:48:04.403 [2024-12-06 07:14:36.802868] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:48:04.403 [2024-12-06 07:14:36.802885] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:48:04.403 [2024-12-06 07:14:36.802976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:48:04.403 [2024-12-06 07:14:36.802989] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:48:04.403 [2024-12-06 07:14:36.803001] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:48:04.403 [2024-12-06 07:14:36.803013] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:48:04.403 [2024-12-06 07:14:36.803023] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:48:04.403 [2024-12-06 07:14:36.803037] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:48:04.403 [2024-12-06 07:14:36.803046] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:48:04.403 [2024-12-06 07:14:36.803054] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:48:04.403 [2024-12-06 07:14:36.803063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:48:04.403 [2024-12-06 07:14:36.803073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.803098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:48:04.403 [2024-12-06 07:14:36.803108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:48:04.403 [2024-12-06 07:14:36.803116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.803190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.403 [2024-12-06 07:14:36.803201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:48:04.403 [2024-12-06 07:14:36.803222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:48:04.403 [2024-12-06 07:14:36.803231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.403 [2024-12-06 07:14:36.803343] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:48:04.403 [2024-12-06 07:14:36.803360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:48:04.403 [2024-12-06 07:14:36.803369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:48:04.403 [2024-12-06 07:14:36.803378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.403 [2024-12-06 07:14:36.803387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:48:04.403 [2024-12-06 07:14:36.803395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:48:04.403 [2024-12-06 07:14:36.803403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:48:04.403 [2024-12-06 07:14:36.803411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:48:04.403 [2024-12-06 07:14:36.803421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:48:04.403 [2024-12-06 07:14:36.803429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.403 [2024-12-06 07:14:36.803436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:48:04.403 [2024-12-06 07:14:36.803444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:48:04.403 [2024-12-06 07:14:36.803452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.403 [2024-12-06 07:14:36.803464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:48:04.403 [2024-12-06 07:14:36.803473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:48:04.403 [2024-12-06 07:14:36.803481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.403 [2024-12-06 07:14:36.803488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:48:04.403 [2024-12-06 07:14:36.803496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:48:04.403 [2024-12-06 07:14:36.803503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.403 [2024-12-06 07:14:36.803511] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:48:04.403 [2024-12-06 07:14:36.803519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:48:04.403 [2024-12-06 07:14:36.803527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:04.404 [2024-12-06 07:14:36.803535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:48:04.404 [2024-12-06 07:14:36.803554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:48:04.404 [2024-12-06 07:14:36.803562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:04.404 [2024-12-06 07:14:36.803570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:48:04.404 [2024-12-06 07:14:36.803578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:48:04.404 [2024-12-06 07:14:36.803585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:04.404 [2024-12-06 07:14:36.803593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:48:04.404 [2024-12-06 07:14:36.803600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:48:04.404 [2024-12-06 07:14:36.803608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:04.404 [2024-12-06 07:14:36.803616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:48:04.404 [2024-12-06 07:14:36.803623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:48:04.404 [2024-12-06 07:14:36.803631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.404 [2024-12-06 07:14:36.803638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:48:04.404 [2024-12-06 07:14:36.803646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:48:04.404 [2024-12-06 07:14:36.803653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.404 [2024-12-06 07:14:36.803661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:48:04.404 [2024-12-06 07:14:36.803669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:48:04.404 [2024-12-06 07:14:36.803676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.404 [2024-12-06 07:14:36.803685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:48:04.404 [2024-12-06 07:14:36.803693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:48:04.404 [2024-12-06 07:14:36.803700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.404 [2024-12-06 07:14:36.803724] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:48:04.404 [2024-12-06 07:14:36.803735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:48:04.404 [2024-12-06 07:14:36.803747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:48:04.404 [2024-12-06 07:14:36.803755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:04.404 [2024-12-06 07:14:36.803768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:48:04.404 [2024-12-06 07:14:36.803777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:48:04.404 [2024-12-06 07:14:36.803784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:48:04.404 [2024-12-06 07:14:36.803792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:48:04.404 [2024-12-06 07:14:36.803800] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:48:04.404 [2024-12-06 07:14:36.803808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:48:04.404 [2024-12-06 07:14:36.803817] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:48:04.404 [2024-12-06 07:14:36.803828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:48:04.404 [2024-12-06 07:14:36.803846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:48:04.404 [2024-12-06 07:14:36.803872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:48:04.404 [2024-12-06 07:14:36.803880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:48:04.404 [2024-12-06 07:14:36.803889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:48:04.404 [2024-12-06 07:14:36.803897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:48:04.404 [2024-12-06 07:14:36.803957] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:48:04.404 [2024-12-06 07:14:36.803966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:04.404 [2024-12-06 07:14:36.803985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:48:04.404 [2024-12-06 07:14:36.803993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:48:04.404 [2024-12-06 07:14:36.804001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:48:04.404 [2024-12-06 07:14:36.804011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:04.404 [2024-12-06 07:14:36.804020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:48:04.404 [2024-12-06 07:14:36.804032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.722 ms 00:48:04.404 [2024-12-06 07:14:36.804041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:04.404 [2024-12-06 07:14:36.804091] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:48:04.404 [2024-12-06 07:14:36.804107] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:48:06.944 [2024-12-06 07:14:39.132305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.944 [2024-12-06 07:14:39.132610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:48:06.944 [2024-12-06 07:14:39.132740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2328.228 ms 00:48:06.944 [2024-12-06 07:14:39.132792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.944 [2024-12-06 07:14:39.158132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.944 [2024-12-06 07:14:39.158330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:48:06.944 [2024-12-06 07:14:39.158464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.990 ms 00:48:06.944 [2024-12-06 07:14:39.158510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.158645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.158840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:48:06.945 [2024-12-06 07:14:39.158888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:48:06.945 [2024-12-06 07:14:39.158920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.190495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.190669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:48:06.945 [2024-12-06 07:14:39.190812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.490 ms 00:48:06.945 [2024-12-06 07:14:39.190858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.190927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.190968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:48:06.945 [2024-12-06 07:14:39.191001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:48:06.945 [2024-12-06 07:14:39.191031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.191555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.191716] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:48:06.945 [2024-12-06 07:14:39.191837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:48:06.945 [2024-12-06 07:14:39.191880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.192047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.192090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:48:06.945 [2024-12-06 07:14:39.192123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:48:06.945 [2024-12-06 07:14:39.192154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.206821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.206857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:48:06.945 [2024-12-06 07:14:39.206888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.635 ms 00:48:06.945 [2024-12-06 07:14:39.206897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.228247] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:48:06.945 [2024-12-06 07:14:39.228284] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:48:06.945 [2024-12-06 07:14:39.228300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.228309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:48:06.945 [2024-12-06 07:14:39.228320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.282 ms 00:48:06.945 [2024-12-06 07:14:39.228328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.242482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.242519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:48:06.945 [2024-12-06 07:14:39.242533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.111 ms 00:48:06.945 [2024-12-06 07:14:39.242542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.254653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.254688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:48:06.945 [2024-12-06 07:14:39.254702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.068 ms 00:48:06.945 [2024-12-06 07:14:39.254755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.266758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.266794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:48:06.945 [2024-12-06 07:14:39.266822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.960 ms 00:48:06.945 [2024-12-06 07:14:39.266831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.267560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.267594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:48:06.945 [2024-12-06 
07:14:39.267623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.622 ms 00:48:06.945 [2024-12-06 07:14:39.267631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.324910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.325008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:48:06.945 [2024-12-06 07:14:39.325041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.240 ms 00:48:06.945 [2024-12-06 07:14:39.325051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.335206] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:48:06.945 [2024-12-06 07:14:39.335838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.335870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:48:06.945 [2024-12-06 07:14:39.335885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.716 ms 00:48:06.945 [2024-12-06 07:14:39.335894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.336006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.336025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:48:06.945 [2024-12-06 07:14:39.336036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:48:06.945 [2024-12-06 07:14:39.336046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.336149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.336166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:48:06.945 [2024-12-06 07:14:39.336176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:48:06.945 [2024-12-06 07:14:39.336185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.336215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.336233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:48:06.945 [2024-12-06 07:14:39.336243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:48:06.945 [2024-12-06 07:14:39.336252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.336285] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:48:06.945 [2024-12-06 07:14:39.336298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.336307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:48:06.945 [2024-12-06 07:14:39.336316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:48:06.945 [2024-12-06 07:14:39.336325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:06.945 [2024-12-06 07:14:39.361662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:06.945 [2024-12-06 07:14:39.361840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:48:06.945 [2024-12-06 07:14:39.361972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.311 ms 00:48:06.945 [2024-12-06 07:14:39.362085] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:48:06.945 [2024-12-06 07:14:39.362226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:48:06.945 [2024-12-06 07:14:39.362280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:48:06.945 [2024-12-06 07:14:39.362381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms
00:48:06.945 [2024-12-06 07:14:39.362422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:48:06.945 [2024-12-06 07:14:39.363742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2584.556 ms, result 0
00:48:06.945 [2024-12-06 07:14:39.378444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:48:06.945 [2024-12-06 07:14:39.394448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:48:06.945 [2024-12-06 07:14:39.402550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:48:07.881 07:14:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:48:07.881 07:14:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:48:07.881 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:48:07.881 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:48:07.881 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:48:07.881 [2024-12-06 07:14:40.399414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:48:07.881 [2024-12-06 07:14:40.399467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:48:07.881 [2024-12-06 07:14:40.399485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms
00:48:07.881 [2024-12-06 07:14:40.399494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:48:07.881 [2024-12-06 07:14:40.399530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:48:07.881 [2024-12-06 07:14:40.399542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:48:07.881 [2024-12-06 07:14:40.399553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:48:07.881 [2024-12-06 07:14:40.399561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:48:07.881 [2024-12-06 07:14:40.399583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:48:07.881 [2024-12-06 07:14:40.399594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:48:07.881 [2024-12-06 07:14:40.399603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:48:07.881 [2024-12-06 07:14:40.399616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:48:07.881 [2024-12-06 07:14:40.399676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.269 ms, result 0
00:48:07.881 true
00:48:08.140 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:48:08.140 {
00:48:08.140   "name": "ftl",
00:48:08.140   "properties": [
00:48:08.140     { "name": "superblock_version", "value": 5, "read-only": true },
00:48:08.140     {
00:48:08.141       "name": "base_device",
00:48:08.141       "bands": [
00:48:08.141         { "id": 0, "state": "CLOSED", "validity": 1.0 },
00:48:08.141         { "id": 1, "state": "CLOSED", "validity": 1.0 },
00:48:08.141         { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
00:48:08.141         { "id": 3, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 4, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 5, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 6, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 7, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 8, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 9, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 10, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 11, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 12, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 13, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 14, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 15, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 16, "state": "FREE", "validity": 0.0 },
00:48:08.141         { "id": 17, "state": "FREE", "validity": 0.0 }
00:48:08.141       ],
00:48:08.141       "read-only": true
00:48:08.141     },
00:48:08.141     {
00:48:08.141       "name": "cache_device",
00:48:08.141       "type": "bdev",
00:48:08.141       "chunks": [
00:48:08.141         { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
00:48:08.141         { "id": 1, "state": "OPEN", "utilization": 0.0 },
00:48:08.141         { "id": 2, "state": "OPEN", "utilization": 0.0 },
00:48:08.141         { "id": 3, "state": "FREE", "utilization": 0.0 },
00:48:08.141         { "id": 4, "state": "FREE", "utilization": 0.0 }
00:48:08.141       ],
00:48:08.141       "read-only": true
00:48:08.141     },
00:48:08.141     { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
00:48:08.141     { "name": "prep_upgrade_on_shutdown", "value": false, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
00:48:08.141   ]
00:48:08.141 }
00:48:08.141 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:48:08.141 07:14:40
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:48:08.141 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:48:08.401 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:48:08.401 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:48:08.401 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:48:08.401 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:48:08.401 07:14:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:48:08.661 Validate MD5 checksum, iteration 1 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:48:08.661 07:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:48:08.921 [2024-12-06 07:14:41.318024] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
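The jq filters above confirm the preconditions (used=0: no cache chunk has nonzero utilization; opened=0: no band is OPENED) before test_validate_checksum starts reading the device back. Each iteration pulls a 1 GiB slice of ftln1 through the NVMe/TCP initiator into a scratch file and compares its MD5 against the sum recorded earlier in the test. A condensed sketch of the loop being traced; md5_sums is an illustrative stand-in for wherever the expected sums were stored:

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2   # two 1 GiB slices in this run
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd wraps spdk_dd with the initiator config (ini.json).
        tcp_dd --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))
        sum=$(md5sum $file | cut -f1 '-d ')
        [[ $sum == "${md5_sums[i]}" ]] || return 1
    done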
00:48:08.921 [2024-12-06 07:14:41.318160] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83950 ] 00:48:08.921 [2024-12-06 07:14:41.485831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:09.181 [2024-12-06 07:14:41.598276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:10.562  [2024-12-06T07:14:44.532Z] Copying: 508/1024 [MB] (508 MBps) [2024-12-06T07:14:44.532Z] Copying: 1005/1024 [MB] (497 MBps) [2024-12-06T07:14:45.469Z] Copying: 1024/1024 [MB] (average 502 MBps) 00:48:12.878 00:48:12.878 07:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:48:12.878 07:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:48:14.779 Validate MD5 checksum, iteration 2 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2135b4109fb4fe16ff8666c8be8ef0ae 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2135b4109fb4fe16ff8666c8be8ef0ae != \2\1\3\5\b\4\1\0\9\f\b\4\f\e\1\6\f\f\8\6\6\6\c\8\b\e\8\e\f\0\a\e ]] 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:48:14.779 07:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:48:14.779 [2024-12-06 07:14:47.256231] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
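In the traced comparison above, the expected checksum appears with every character backslash-escaped (\2\1\3\5...). That is an xtrace artifact, not corruption: the right-hand side of != inside [[ ]] is a glob pattern, and bash renders a quoted (literal) operand in the trace by escaping each character. The test amounts to a plain string inequality check:

    # expected_sum is an illustrative name for the previously recorded sum.
    [[ $sum != "$expected_sum" ]] && return 1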
00:48:14.779 [2024-12-06 07:14:47.256943] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84013 ] 00:48:15.036 [2024-12-06 07:14:47.443564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:15.036 [2024-12-06 07:14:47.569309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:17.000  [2024-12-06T07:14:50.165Z] Copying: 493/1024 [MB] (493 MBps) [2024-12-06T07:14:50.422Z] Copying: 982/1024 [MB] (489 MBps) [2024-12-06T07:14:50.988Z] Copying: 1024/1024 [MB] (average 492 MBps) 00:48:18.397 00:48:18.397 07:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:48:18.397 07:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b0373dd3e9c3c539958d881e438336aa 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b0373dd3e9c3c539958d881e438336aa != \b\0\3\7\3\d\d\3\e\9\c\3\c\5\3\9\9\5\8\d\8\8\1\e\4\3\8\3\3\6\a\a ]] 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83871 ]] 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83871 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84069 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84069 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84069 ']' 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:20.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
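tcp_target_shutdown_dirty then kills pid 83871 with SIGKILL, so the 'FTL shutdown' management sequence never runs and the device is left dirty, and tcp_target_setup immediately relaunches spdk_tgt (pid 84069) from the same tgt.json. The startup that follows takes FTL's recovery path (Initialize recovery, Recover band state, Restore P2L checkpoints) rather than the clean startup seen earlier. The pattern, condensed from the trace:

    # SIGKILL skips the 'FTL shutdown' sequence, leaving the device dirty.
    kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid

    # Relaunch from the identical config; FTL startup now recovers band
    # state and replays P2L checkpoints instead of starting clean.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid   # harness helper from autotest_common.sh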
00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:20.299 07:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:20.558 [2024-12-06 07:14:52.935801] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:48:20.558 [2024-12-06 07:14:52.936262] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84069 ] 00:48:20.558 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83871 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:48:20.558 [2024-12-06 07:14:53.115344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:20.817 [2024-12-06 07:14:53.194458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:21.385 [2024-12-06 07:14:53.903059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:48:21.385 [2024-12-06 07:14:53.903366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:48:21.646 [2024-12-06 07:14:54.048000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.048042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:48:21.646 [2024-12-06 07:14:54.048060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:48:21.646 [2024-12-06 07:14:54.048070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.048141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.048158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:48:21.646 [2024-12-06 07:14:54.048169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:48:21.646 [2024-12-06 07:14:54.048177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.048212] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:48:21.646 [2024-12-06 07:14:54.049085] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:48:21.646 [2024-12-06 07:14:54.049111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.049139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:48:21.646 [2024-12-06 07:14:54.049150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:48:21.646 [2024-12-06 07:14:54.049175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.049629] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:48:21.646 [2024-12-06 07:14:54.067988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.068030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:48:21.646 [2024-12-06 07:14:54.068047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.360 ms 00:48:21.646 [2024-12-06 07:14:54.068058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.079688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:48:21.646 [2024-12-06 07:14:54.079755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:48:21.646 [2024-12-06 07:14:54.079772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:48:21.646 [2024-12-06 07:14:54.079782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.080243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.080265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:48:21.646 [2024-12-06 07:14:54.080276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.358 ms 00:48:21.646 [2024-12-06 07:14:54.080285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.080344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.080359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:48:21.646 [2024-12-06 07:14:54.080369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:48:21.646 [2024-12-06 07:14:54.080393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.080456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.080470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:48:21.646 [2024-12-06 07:14:54.080481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:48:21.646 [2024-12-06 07:14:54.080490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.080545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:48:21.646 [2024-12-06 07:14:54.084107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.084141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:48:21.646 [2024-12-06 07:14:54.084155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.578 ms 00:48:21.646 [2024-12-06 07:14:54.084164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.084197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.084210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:48:21.646 [2024-12-06 07:14:54.084221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:48:21.646 [2024-12-06 07:14:54.084229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.084269] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:48:21.646 [2024-12-06 07:14:54.084296] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:48:21.646 [2024-12-06 07:14:54.084330] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:48:21.646 [2024-12-06 07:14:54.084348] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:48:21.646 [2024-12-06 07:14:54.084434] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:48:21.646 [2024-12-06 07:14:54.084446] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:48:21.646 [2024-12-06 07:14:54.084458] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:48:21.646 [2024-12-06 07:14:54.084469] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:48:21.646 [2024-12-06 07:14:54.084479] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:48:21.646 [2024-12-06 07:14:54.084488] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:48:21.646 [2024-12-06 07:14:54.084497] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:48:21.646 [2024-12-06 07:14:54.084504] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:48:21.646 [2024-12-06 07:14:54.084513] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:48:21.646 [2024-12-06 07:14:54.084565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.084576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:48:21.646 [2024-12-06 07:14:54.084587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:48:21.646 [2024-12-06 07:14:54.084596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.084686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.646 [2024-12-06 07:14:54.084700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:48:21.646 [2024-12-06 07:14:54.084710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:48:21.646 [2024-12-06 07:14:54.084733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.646 [2024-12-06 07:14:54.084895] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:48:21.646 [2024-12-06 07:14:54.084916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:48:21.646 [2024-12-06 07:14:54.084928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:48:21.646 [2024-12-06 07:14:54.084939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.646 [2024-12-06 07:14:54.084964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:48:21.646 [2024-12-06 07:14:54.084973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:48:21.646 [2024-12-06 07:14:54.084983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:48:21.646 [2024-12-06 07:14:54.085009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:48:21.646 [2024-12-06 07:14:54.085032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:48:21.646 [2024-12-06 07:14:54.085041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.646 [2024-12-06 07:14:54.085050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:48:21.646 [2024-12-06 07:14:54.085074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:48:21.646 [2024-12-06 07:14:54.085082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.646 [2024-12-06 07:14:54.085096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:48:21.646 [2024-12-06 07:14:54.085105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:48:21.646 [2024-12-06 07:14:54.085113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.646 [2024-12-06 07:14:54.085139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:48:21.646 [2024-12-06 07:14:54.085148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:48:21.646 [2024-12-06 07:14:54.085157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.646 [2024-12-06 07:14:54.085165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:48:21.646 [2024-12-06 07:14:54.085174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:48:21.646 [2024-12-06 07:14:54.085209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:21.646 [2024-12-06 07:14:54.085219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:48:21.646 [2024-12-06 07:14:54.085228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:48:21.646 [2024-12-06 07:14:54.085237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:21.646 [2024-12-06 07:14:54.085246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:48:21.646 [2024-12-06 07:14:54.085255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:48:21.646 [2024-12-06 07:14:54.085263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:21.646 [2024-12-06 07:14:54.085272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:48:21.646 [2024-12-06 07:14:54.085281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:48:21.646 [2024-12-06 07:14:54.085290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:21.646 [2024-12-06 07:14:54.085299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:48:21.646 [2024-12-06 07:14:54.085308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:48:21.646 [2024-12-06 07:14:54.085318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.646 [2024-12-06 07:14:54.085327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:48:21.646 [2024-12-06 07:14:54.085336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:48:21.646 [2024-12-06 07:14:54.085345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.646 [2024-12-06 07:14:54.085354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:48:21.647 [2024-12-06 07:14:54.085362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:48:21.647 [2024-12-06 07:14:54.085371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.647 [2024-12-06 07:14:54.085380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:48:21.647 [2024-12-06 07:14:54.085389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:48:21.647 [2024-12-06 07:14:54.085398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:21.647 [2024-12-06 07:14:54.085407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:48:21.647 [2024-12-06 07:14:54.085417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:48:21.647 [2024-12-06 07:14:54.085429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:48:21.647 [2024-12-06 07:14:54.085439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:48:21.647 [2024-12-06 07:14:54.085449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:48:21.647 [2024-12-06 07:14:54.085458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:48:21.647 [2024-12-06 07:14:54.085467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:48:21.647 [2024-12-06 07:14:54.085476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:48:21.647 [2024-12-06 07:14:54.085485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:48:21.647 [2024-12-06 07:14:54.085494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:48:21.647 [2024-12-06 07:14:54.085504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:48:21.647 [2024-12-06 07:14:54.085518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:48:21.647 [2024-12-06 07:14:54.085539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:48:21.647 [2024-12-06 07:14:54.085570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:48:21.647 [2024-12-06 07:14:54.085580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:48:21.647 [2024-12-06 07:14:54.085590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:48:21.647 [2024-12-06 07:14:54.085600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:48:21.647 [2024-12-06 07:14:54.085674] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:48:21.647 [2024-12-06 07:14:54.085686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:21.647 [2024-12-06 07:14:54.085727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:48:21.647 [2024-12-06 07:14:54.085753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:48:21.647 [2024-12-06 07:14:54.085764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:48:21.647 [2024-12-06 07:14:54.085777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.085789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:48:21.647 [2024-12-06 07:14:54.085803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.981 ms 00:48:21.647 [2024-12-06 07:14:54.085814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.111826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.111874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:48:21.647 [2024-12-06 07:14:54.111908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.932 ms 00:48:21.647 [2024-12-06 07:14:54.111917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.111968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.111982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:48:21.647 [2024-12-06 07:14:54.111992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:48:21.647 [2024-12-06 07:14:54.112001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.143418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.143464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:48:21.647 [2024-12-06 07:14:54.143480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.349 ms 00:48:21.647 [2024-12-06 07:14:54.143489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.143538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.143552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:48:21.647 [2024-12-06 07:14:54.143563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:48:21.647 [2024-12-06 07:14:54.143577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.143763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.143781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:48:21.647 [2024-12-06 07:14:54.143792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.107 ms 00:48:21.647 [2024-12-06 07:14:54.143811] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.143879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.143893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:48:21.647 [2024-12-06 07:14:54.143904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:48:21.647 [2024-12-06 07:14:54.143914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.158343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.158381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:48:21.647 [2024-12-06 07:14:54.158397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.396 ms 00:48:21.647 [2024-12-06 07:14:54.158411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.158543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.158569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:48:21.647 [2024-12-06 07:14:54.158582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:48:21.647 [2024-12-06 07:14:54.158591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.188482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.188695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:48:21.647 [2024-12-06 07:14:54.188757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.868 ms 00:48:21.647 [2024-12-06 07:14:54.188774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.647 [2024-12-06 07:14:54.198512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.647 [2024-12-06 07:14:54.198548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:48:21.647 [2024-12-06 07:14:54.198571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:48:21.647 [2024-12-06 07:14:54.198580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.907 [2024-12-06 07:14:54.256615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.907 [2024-12-06 07:14:54.256692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:48:21.907 [2024-12-06 07:14:54.256765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.970 ms 00:48:21.907 [2024-12-06 07:14:54.256779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.907 [2024-12-06 07:14:54.257015] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:48:21.907 [2024-12-06 07:14:54.257212] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:48:21.907 [2024-12-06 07:14:54.257333] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:48:21.907 [2024-12-06 07:14:54.257451] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:48:21.907 [2024-12-06 07:14:54.257465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.907 [2024-12-06 07:14:54.257476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:48:21.907 [2024-12-06 
07:14:54.257487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.581 ms 00:48:21.907 [2024-12-06 07:14:54.257497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.907 [2024-12-06 07:14:54.257618] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:48:21.907 [2024-12-06 07:14:54.257638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.907 [2024-12-06 07:14:54.257653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:48:21.907 [2024-12-06 07:14:54.257664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:48:21.907 [2024-12-06 07:14:54.257674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.907 [2024-12-06 07:14:54.272653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.907 [2024-12-06 07:14:54.272917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:48:21.907 [2024-12-06 07:14:54.272944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.950 ms 00:48:21.907 [2024-12-06 07:14:54.272958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.907 [2024-12-06 07:14:54.284593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.907 [2024-12-06 07:14:54.284629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:48:21.907 [2024-12-06 07:14:54.284645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:48:21.907 [2024-12-06 07:14:54.284655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:21.907 [2024-12-06 07:14:54.284792] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:48:21.907 [2024-12-06 07:14:54.284955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:21.907 [2024-12-06 07:14:54.284967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:48:21.907 [2024-12-06 07:14:54.284977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:48:21.907 [2024-12-06 07:14:54.284986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:22.475 [2024-12-06 07:14:54.865030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:22.475 [2024-12-06 07:14:54.865318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:48:22.475 [2024-12-06 07:14:54.865348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 579.040 ms 00:48:22.475 [2024-12-06 07:14:54.865361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:22.475 [2024-12-06 07:14:54.869983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:22.475 [2024-12-06 07:14:54.870027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:48:22.475 [2024-12-06 07:14:54.870059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.026 ms 00:48:22.475 [2024-12-06 07:14:54.870071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:22.475 [2024-12-06 07:14:54.870588] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:48:22.475 [2024-12-06 07:14:54.870614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:22.475 [2024-12-06 07:14:54.870627] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:48:22.475 [2024-12-06 07:14:54.870639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:48:22.475 [2024-12-06 07:14:54.870666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:22.475 [2024-12-06 07:14:54.870723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:22.475 [2024-12-06 07:14:54.870757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:48:22.475 [2024-12-06 07:14:54.870770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:48:22.475 [2024-12-06 07:14:54.870787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:22.475 [2024-12-06 07:14:54.870866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 586.075 ms, result 0 00:48:22.475 [2024-12-06 07:14:54.870929] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:48:22.475 [2024-12-06 07:14:54.871007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:22.475 [2024-12-06 07:14:54.871020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:48:22.475 [2024-12-06 07:14:54.871046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:48:22.475 [2024-12-06 07:14:54.871055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.464879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.465247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:48:23.042 [2024-12-06 07:14:55.465310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 592.657 ms 00:48:23.042 [2024-12-06 07:14:55.465322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.469927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.469971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:48:23.042 [2024-12-06 07:14:55.469987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 00:48:23.042 [2024-12-06 07:14:55.469997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.470435] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:48:23.042 [2024-12-06 07:14:55.470467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.470479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:48:23.042 [2024-12-06 07:14:55.470491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.433 ms 00:48:23.042 [2024-12-06 07:14:55.470500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.470542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.470559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:48:23.042 [2024-12-06 07:14:55.470570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:48:23.042 [2024-12-06 07:14:55.470580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 
07:14:55.470626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 599.696 ms, result 0 00:48:23.042 [2024-12-06 07:14:55.470675] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:48:23.042 [2024-12-06 07:14:55.470705] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:48:23.042 [2024-12-06 07:14:55.470733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.470747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:48:23.042 [2024-12-06 07:14:55.470759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1185.991 ms 00:48:23.042 [2024-12-06 07:14:55.470769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.470828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.470849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:48:23.042 [2024-12-06 07:14:55.470861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:48:23.042 [2024-12-06 07:14:55.470870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.481543] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:48:23.042 [2024-12-06 07:14:55.481684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.481701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:48:23.042 [2024-12-06 07:14:55.481713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.794 ms 00:48:23.042 [2024-12-06 07:14:55.481783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.482546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.482593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:48:23.042 [2024-12-06 07:14:55.482627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.663 ms 00:48:23.042 [2024-12-06 07:14:55.482637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.484979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.485005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:48:23.042 [2024-12-06 07:14:55.485033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.318 ms 00:48:23.042 [2024-12-06 07:14:55.485043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.485085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.485099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:48:23.042 [2024-12-06 07:14:55.485110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:48:23.042 [2024-12-06 07:14:55.485124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.485224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.485238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:48:23.042 
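
The two layout dumps near the top of this FTL startup describe the same regions twice: ftl_layout.c prints offsets and sizes in MiB, while the superblock v5 dump prints the same regions as raw block offsets and counts in hex. Assuming the 4 KiB FTL block size those numbers imply (an assumption; the constant itself is not printed here), the two tables agree. A small hedged cross-check, with example values taken from the dumps above:

    blk_to_mib() {
        # $1 is a hex block count or offset from the sb_v5 dump;
        # 1 FTL block = 4096 bytes (assumed, consistent with both dumps)
        awk -v blks="$(( $1 ))" 'BEGIN { printf "%.2f MiB\n", blks * 4096 / 1048576 }'
    }
    blk_to_mib 0xee0   # -> 14.88 MiB, the p2l0 offset (region type 0xa)
    blk_to_mib 0x800   # -> 8.00 MiB, the size of each P2L checkpoint region
    blk_to_mib 0x20    # -> 0.12 MiB, the superblock and trim regions
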
[2024-12-06 07:14:55.485249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:48:23.042 [2024-12-06 07:14:55.485258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.485282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.485293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:48:23.042 [2024-12-06 07:14:55.485303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:48:23.042 [2024-12-06 07:14:55.485312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.485350] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:48:23.042 [2024-12-06 07:14:55.485364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.485374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:48:23.042 [2024-12-06 07:14:55.485383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:48:23.042 [2024-12-06 07:14:55.485392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.485443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:23.042 [2024-12-06 07:14:55.485456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:48:23.042 [2024-12-06 07:14:55.485465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:48:23.042 [2024-12-06 07:14:55.485474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:23.042 [2024-12-06 07:14:55.486762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1438.286 ms, result 0 00:48:23.042 [2024-12-06 07:14:55.502201] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:23.042 [2024-12-06 07:14:55.518206] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:48:23.042 [2024-12-06 07:14:55.526525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:23.042 Validate MD5 checksum, iteration 1 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:48:23.042 07:14:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:48:23.042 07:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:48:23.301 [2024-12-06 07:14:55.669509] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:48:23.301 [2024-12-06 07:14:55.669907] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84104 ] 00:48:23.301 [2024-12-06 07:14:55.840815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:23.560 [2024-12-06 07:14:55.957171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:24.941  [2024-12-06T07:14:58.913Z] Copying: 502/1024 [MB] (502 MBps) [2024-12-06T07:14:58.913Z] Copying: 994/1024 [MB] (492 MBps) [2024-12-06T07:14:59.849Z] Copying: 1024/1024 [MB] (average 497 MBps) 00:48:27.258 00:48:27.258 07:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:48:27.258 07:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:48:29.158 Validate MD5 checksum, iteration 2 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2135b4109fb4fe16ff8666c8be8ef0ae 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2135b4109fb4fe16ff8666c8be8ef0ae != \2\1\3\5\b\4\1\0\9\f\b\4\f\e\1\6\f\f\8\6\6\6\c\8\b\e\8\e\f\0\a\e ]] 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:48:29.158 07:15:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:48:29.158 
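
The xtrace above is the checksum-validation loop from upgrade_shutdown.sh: each iteration reads a 1 GiB window from the ftln1 bdev over NVMe/TCP via spdk_dd (wrapped by tcp_dd), hashes it, and compares against a checksum recorded before the shutdown/upgrade cycle. A hedged sketch of the traced function — the tcp_dd arguments and the iterations variable come straight from the trace, while $testdir and the checksums array name are illustrative assumptions:

    test_validate_checksum() {
        local skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $(( i + 1 ))"
            # read 1024 x 1 MiB blocks, queue depth 2, advancing 1 GiB per pass
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 \
                   --qd=2 --skip="$skip"
            skip=$(( skip + 1024 ))
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            # any mismatch means data was lost across the FTL shutdown/upgrade
            [[ $sum == "${checksums[i]}" ]] || return 1
        done
    }
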
[2024-12-06 07:15:01.643428] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 00:48:29.158 [2024-12-06 07:15:01.643821] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84170 ] 00:48:29.417 [2024-12-06 07:15:01.829068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:29.417 [2024-12-06 07:15:01.946829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:31.319  [2024-12-06T07:15:04.849Z] Copying: 498/1024 [MB] (498 MBps) [2024-12-06T07:15:04.849Z] Copying: 972/1024 [MB] (474 MBps) [2024-12-06T07:15:06.224Z] Copying: 1024/1024 [MB] (average 485 MBps) 00:48:33.633 00:48:33.633 07:15:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:48:33.633 07:15:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:48:35.532 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:48:35.532 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b0373dd3e9c3c539958d881e438336aa 00:48:35.532 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b0373dd3e9c3c539958d881e438336aa != \b\0\3\7\3\d\d\3\e\9\c\3\c\5\3\9\9\5\8\d\8\8\1\e\4\3\8\3\3\6\a\a ]] 00:48:35.532 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:48:35.532 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:48:35.532 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:48:35.532 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:48:35.532 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:48:35.533 07:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84069 ]] 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84069 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84069 ']' 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84069 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84069 00:48:35.533 killing process with pid 84069 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84069' 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84069 00:48:35.533 07:15:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84069 00:48:36.469 [2024-12-06 07:15:08.798587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:48:36.469 [2024-12-06 07:15:08.812143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.469 [2024-12-06 07:15:08.812185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:48:36.469 [2024-12-06 07:15:08.812203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:48:36.469 [2024-12-06 07:15:08.812213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.469 [2024-12-06 07:15:08.812239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:48:36.469 [2024-12-06 07:15:08.815014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.469 [2024-12-06 07:15:08.815185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:48:36.469 [2024-12-06 07:15:08.815217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.756 ms 00:48:36.469 [2024-12-06 07:15:08.815229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.469 [2024-12-06 07:15:08.815498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.469 [2024-12-06 07:15:08.815515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:48:36.470 [2024-12-06 07:15:08.815527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.236 ms 00:48:36.470 [2024-12-06 07:15:08.815536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.818068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.818237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:48:36.470 [2024-12-06 07:15:08.818345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.513 ms 00:48:36.470 [2024-12-06 07:15:08.818398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.819470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.819624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:48:36.470 [2024-12-06 07:15:08.819662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.002 ms 00:48:36.470 [2024-12-06 07:15:08.819675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.830222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.830402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:48:36.470 [2024-12-06 07:15:08.830429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.423 ms 00:48:36.470 [2024-12-06 07:15:08.830448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.836861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.837024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:48:36.470 [2024-12-06 07:15:08.837049] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.368 ms 00:48:36.470 [2024-12-06 07:15:08.837061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.837147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.837165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:48:36.470 [2024-12-06 07:15:08.837176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:48:36.470 [2024-12-06 07:15:08.837192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.847594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.847629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:48:36.470 [2024-12-06 07:15:08.847643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.367 ms 00:48:36.470 [2024-12-06 07:15:08.847652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.858176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.858210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:48:36.470 [2024-12-06 07:15:08.858224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.489 ms 00:48:36.470 [2024-12-06 07:15:08.858232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.868167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.868200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:48:36.470 [2024-12-06 07:15:08.868214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.900 ms 00:48:36.470 [2024-12-06 07:15:08.868223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.878282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.878315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:48:36.470 [2024-12-06 07:15:08.878328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.998 ms 00:48:36.470 [2024-12-06 07:15:08.878336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.878371] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:48:36.470 [2024-12-06 07:15:08.878391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:48:36.470 [2024-12-06 07:15:08.878402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:48:36.470 [2024-12-06 07:15:08.878412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:48:36.470 [2024-12-06 07:15:08.878421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 
07:15:08.878457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:48:36.470 [2024-12-06 07:15:08.878554] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:48:36.470 [2024-12-06 07:15:08.878563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dd5df755-28b3-497e-b4d5-916ca521a385 00:48:36.470 [2024-12-06 07:15:08.878572] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:48:36.470 [2024-12-06 07:15:08.878581] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:48:36.470 [2024-12-06 07:15:08.878589] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:48:36.470 [2024-12-06 07:15:08.878598] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:48:36.470 [2024-12-06 07:15:08.878606] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:48:36.470 [2024-12-06 07:15:08.878615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:48:36.470 [2024-12-06 07:15:08.878629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:48:36.470 [2024-12-06 07:15:08.878637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:48:36.470 [2024-12-06 07:15:08.878644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:48:36.470 [2024-12-06 07:15:08.878655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.878664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:48:36.470 [2024-12-06 07:15:08.878674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:48:36.470 [2024-12-06 07:15:08.878683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.891686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.891758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:48:36.470 [2024-12-06 07:15:08.891789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 12.983 ms 00:48:36.470 [2024-12-06 07:15:08.891799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.893449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:36.470 [2024-12-06 07:15:08.893478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:48:36.470 [2024-12-06 07:15:08.893491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.312 ms 00:48:36.470 [2024-12-06 07:15:08.893501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.935983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.470 [2024-12-06 07:15:08.936023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:48:36.470 [2024-12-06 07:15:08.936052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.470 [2024-12-06 07:15:08.936067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.936100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.470 [2024-12-06 07:15:08.936128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:48:36.470 [2024-12-06 07:15:08.936138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.470 [2024-12-06 07:15:08.936146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.936250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.470 [2024-12-06 07:15:08.936267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:48:36.470 [2024-12-06 07:15:08.936277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.470 [2024-12-06 07:15:08.936286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:08.936311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.470 [2024-12-06 07:15:08.936323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:48:36.470 [2024-12-06 07:15:08.936332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.470 [2024-12-06 07:15:08.936341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.470 [2024-12-06 07:15:09.015812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.470 [2024-12-06 07:15:09.015870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:48:36.470 [2024-12-06 07:15:09.015887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.470 [2024-12-06 07:15:09.015896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.729 [2024-12-06 07:15:09.081764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.729 [2024-12-06 07:15:09.081810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:48:36.729 [2024-12-06 07:15:09.081826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.729 [2024-12-06 07:15:09.081836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.729 [2024-12-06 07:15:09.081923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.729 [2024-12-06 07:15:09.081938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:48:36.729 [2024-12-06 07:15:09.081949] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.729 [2024-12-06 07:15:09.081958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.729 [2024-12-06 07:15:09.082024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.729 [2024-12-06 07:15:09.082054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:48:36.729 [2024-12-06 07:15:09.082064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.729 [2024-12-06 07:15:09.082073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.729 [2024-12-06 07:15:09.082175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.729 [2024-12-06 07:15:09.082191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:48:36.729 [2024-12-06 07:15:09.082201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.729 [2024-12-06 07:15:09.082210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.729 [2024-12-06 07:15:09.082251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.729 [2024-12-06 07:15:09.082266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:48:36.729 [2024-12-06 07:15:09.082282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.729 [2024-12-06 07:15:09.082290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.729 [2024-12-06 07:15:09.082328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.729 [2024-12-06 07:15:09.082341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:48:36.729 [2024-12-06 07:15:09.082350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.729 [2024-12-06 07:15:09.082358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.729 [2024-12-06 07:15:09.082401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:48:36.729 [2024-12-06 07:15:09.082420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:48:36.729 [2024-12-06 07:15:09.082429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:48:36.729 [2024-12-06 07:15:09.082437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:36.729 [2024-12-06 07:15:09.082556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 270.383 ms, result 0 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:48:37.665 Remove shared memory files 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
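
Every management step above is logged as a trace_step quadruple (Action, name, duration, status), and each whole process ends with a finish_msg total, e.g. 'FTL shutdown', duration = 270.383 ms just above. A hedged awk helper to flatten a saved console log (one notice per line, as the console originally emits it; the file name is illustrative) into a duration/name table:

    awk '
        match($0, /name: .*/)             { name = substr($0, RSTART + 6) }
        match($0, /duration: [0-9.]+ ms/) { print substr($0, RSTART + 10, RLENGTH - 10) "\t" name }
    ' ftl_console.log

Note the per-step durations do not sum to the finish_msg total; the gaps are time spent between steps on asynchronous completions, so the finish_msg figures are the authoritative ones.
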
00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83871 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:48:37.665 ************************************ 00:48:37.665 END TEST ftl_upgrade_shutdown 00:48:37.665 ************************************ 00:48:37.665 00:48:37.665 real 1m22.926s 00:48:37.665 user 1m58.627s 00:48:37.665 sys 0m21.503s 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:37.665 07:15:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:37.665 07:15:10 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:48:37.665 07:15:10 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:48:37.665 07:15:10 ftl -- ftl/ftl.sh@14 -- # killprocess 76514 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@954 -- # '[' -z 76514 ']' 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@958 -- # kill -0 76514 00:48:37.665 Process with pid 76514 is not found 00:48:37.665 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76514) - No such process 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76514 is not found' 00:48:37.665 07:15:10 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:48:37.665 07:15:10 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84285 00:48:37.665 07:15:10 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:37.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:37.665 07:15:10 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84285 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@835 -- # '[' -z 84285 ']' 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:37.665 07:15:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:48:37.665 [2024-12-06 07:15:10.159191] Starting SPDK v25.01-pre git sha1 f501a7223 / DPDK 24.03.0 initialization... 
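
killprocess appears twice in this tail: pid 84069 (the FTL target) is probed and killed above, while pid 76514 is already gone and only logs 'Process with pid 76514 is not found'. A hedged reconstruction of the helper covering only the branches this trace exercises; the real autotest_common.sh body may differ:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1
        if ! kill -0 "$pid" 2>/dev/null; then        # probe without signaling
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the trace compares the command name against "sudo" before killing,
            # presumably so a sudo wrapper gets different treatment
            [[ $process_name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                      # reap; pid is a child here
    }
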
00:48:37.665 [2024-12-06 07:15:10.159365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84285 ] 00:48:37.923 [2024-12-06 07:15:10.337715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:37.923 [2024-12-06 07:15:10.415607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:38.491 07:15:11 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:38.491 07:15:11 ftl -- common/autotest_common.sh@868 -- # return 0 00:48:38.491 07:15:11 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:48:39.058 nvme0n1 00:48:39.058 07:15:11 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:48:39.058 07:15:11 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:39.058 07:15:11 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:48:39.058 07:15:11 ftl -- ftl/common.sh@28 -- # stores=85455f87-4036-423d-a319-2cb2decf5749 00:48:39.058 07:15:11 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:48:39.058 07:15:11 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85455f87-4036-423d-a319-2cb2decf5749 00:48:39.317 07:15:11 ftl -- ftl/ftl.sh@23 -- # killprocess 84285 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@954 -- # '[' -z 84285 ']' 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@958 -- # kill -0 84285 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@959 -- # uname 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84285 00:48:39.317 killing process with pid 84285 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84285' 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@973 -- # kill 84285 00:48:39.317 07:15:11 ftl -- common/autotest_common.sh@978 -- # wait 84285 00:48:41.223 07:15:13 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:48:41.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:48:41.223 Waiting for block devices as requested 00:48:41.223 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:48:41.482 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:41.482 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:48:41.482 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:48:46.750 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:48:46.750 Remove shared memory files 00:48:46.750 07:15:19 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:48:46.750 07:15:19 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:48:46.750 07:15:19 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:48:46.750 07:15:19 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:48:46.750 07:15:19 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:48:46.750 07:15:19 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:48:46.750 07:15:19 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:48:46.750 
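
The setup.sh reset step above hands the four test NVMe controllers back from uio_pci_generic to the kernel nvme driver (the 'uio_pci_generic -> nvme' lines). A hedged sketch of the standard sysfs mechanics for a single device, run as root; setup.sh itself does considerably more (hugepages, permissions, device enumeration), so this shows only the rebind idea:

    bdf=0000:00:11.0                                          # address from the log
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach uio_pci_generic
    echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override" # pin the next probe to nvme
    echo "$bdf" > /sys/bus/pci/drivers_probe                  # ask the kernel to rebind
    echo ""     > "/sys/bus/pci/devices/$bdf/driver_override" # clear the override again
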
************************************
00:48:46.750 END TEST ftl
00:48:46.750 ************************************
00:48:46.750
00:48:46.750 real 11m47.985s
00:48:46.750 user 14m37.520s
00:48:46.750 sys 1m26.050s
00:48:46.750 07:15:19 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:48:46.750 07:15:19 ftl -- common/autotest_common.sh@10 -- # set +x
00:48:46.750 07:15:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:48:46.750 07:15:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:48:46.750 07:15:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:48:46.750 07:15:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:48:46.750 07:15:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:48:46.750 07:15:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:48:46.750 07:15:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:48:46.750 07:15:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:48:46.750 07:15:19 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:48:46.750 07:15:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:48:46.750 07:15:19 -- common/autotest_common.sh@726 -- # xtrace_disable
00:48:46.750 07:15:19 -- common/autotest_common.sh@10 -- # set +x
00:48:46.750 07:15:19 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:48:46.750 07:15:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:48:46.750 07:15:19 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:48:46.750 07:15:19 -- common/autotest_common.sh@10 -- # set +x
00:48:48.651 INFO: APP EXITING
00:48:48.651 INFO: killing all VMs
00:48:48.651 INFO: killing vhost app
00:48:48.651 INFO: EXIT DONE
00:48:48.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:48:49.219 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:48:49.219 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:48:49.219 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:48:49.219 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:48:49.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:48:50.046 Cleaning
00:48:50.046 Removing: /var/run/dpdk/spdk0/config
00:48:50.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:48:50.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:48:50.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:48:50.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:48:50.046 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:48:50.046 Removing: /var/run/dpdk/spdk0/hugepage_info
00:48:50.046 Removing: /var/run/dpdk/spdk0
00:48:50.046 Removing: /var/run/dpdk/spdk_pid57690
00:48:50.046 Removing: /var/run/dpdk/spdk_pid57914
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58138
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58242
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58292
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58426
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58444
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58644
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58749
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58856
00:48:50.046 Removing: /var/run/dpdk/spdk_pid58977
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59075
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59120
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59157
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59227
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59339
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59808
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59885
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59959
00:48:50.046 Removing: /var/run/dpdk/spdk_pid59975
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60115
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60131
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60265
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60288
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60352
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60370
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60434
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60452
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60640
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60676
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60765
00:48:50.046 Removing: /var/run/dpdk/spdk_pid60948
00:48:50.046 Removing: /var/run/dpdk/spdk_pid61042
00:48:50.046 Removing: /var/run/dpdk/spdk_pid61080
00:48:50.046 Removing: /var/run/dpdk/spdk_pid61563
00:48:50.046 Removing: /var/run/dpdk/spdk_pid61661
00:48:50.046 Removing: /var/run/dpdk/spdk_pid61780
00:48:50.046 Removing: /var/run/dpdk/spdk_pid61834
00:48:50.046 Removing: /var/run/dpdk/spdk_pid61854
00:48:50.046 Removing: /var/run/dpdk/spdk_pid61938
00:48:50.046 Removing: /var/run/dpdk/spdk_pid62572
00:48:50.046 Removing: /var/run/dpdk/spdk_pid62610
00:48:50.046 Removing: /var/run/dpdk/spdk_pid63131
00:48:50.046 Removing: /var/run/dpdk/spdk_pid63234
00:48:50.046 Removing: /var/run/dpdk/spdk_pid63350
00:48:50.046 Removing: /var/run/dpdk/spdk_pid63403
00:48:50.046 Removing: /var/run/dpdk/spdk_pid63434
00:48:50.046 Removing: /var/run/dpdk/spdk_pid63460
00:48:50.046 Removing: /var/run/dpdk/spdk_pid65342
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65487
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65491
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65514
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65554
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65558
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65570
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65615
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65619
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65631
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65681
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65685
00:48:50.326 Removing: /var/run/dpdk/spdk_pid65697
00:48:50.326 Removing: /var/run/dpdk/spdk_pid67091
00:48:50.326 Removing: /var/run/dpdk/spdk_pid67205
00:48:50.326 Removing: /var/run/dpdk/spdk_pid68623
00:48:50.326 Removing: /var/run/dpdk/spdk_pid70395
00:48:50.326 Removing: /var/run/dpdk/spdk_pid70473
00:48:50.326 Removing: /var/run/dpdk/spdk_pid70551
00:48:50.326 Removing: /var/run/dpdk/spdk_pid70664
00:48:50.326 Removing: /var/run/dpdk/spdk_pid70757
00:48:50.326 Removing: /var/run/dpdk/spdk_pid70853
00:48:50.326 Removing: /var/run/dpdk/spdk_pid70937
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71008
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71118
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71215
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71312
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71392
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71468
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71581
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71679
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71780
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71854
00:48:50.326 Removing: /var/run/dpdk/spdk_pid71934
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72041
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72138
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72234
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72308
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72388
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72468
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72541
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72646
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72742
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72845
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72914
00:48:50.326 Removing: /var/run/dpdk/spdk_pid72995
00:48:50.326 Removing: /var/run/dpdk/spdk_pid73071
00:48:50.326 Removing: /var/run/dpdk/spdk_pid73146
00:48:50.326 Removing: /var/run/dpdk/spdk_pid73257
00:48:50.326 Removing: /var/run/dpdk/spdk_pid73349
00:48:50.326 Removing: /var/run/dpdk/spdk_pid73493
00:48:50.326 Removing: /var/run/dpdk/spdk_pid73777
00:48:50.326 Removing: /var/run/dpdk/spdk_pid73814
00:48:50.326 Removing: /var/run/dpdk/spdk_pid74275
00:48:50.326 Removing: /var/run/dpdk/spdk_pid74460
00:48:50.326 Removing: /var/run/dpdk/spdk_pid74555
00:48:50.326 Removing: /var/run/dpdk/spdk_pid74664
00:48:50.326 Removing: /var/run/dpdk/spdk_pid74707
00:48:50.326 Removing: /var/run/dpdk/spdk_pid74733
00:48:50.326 Removing: /var/run/dpdk/spdk_pid75043
00:48:50.326 Removing: /var/run/dpdk/spdk_pid75098
00:48:50.326 Removing: /var/run/dpdk/spdk_pid75176
00:48:50.326 Removing: /var/run/dpdk/spdk_pid75578
00:48:50.326 Removing: /var/run/dpdk/spdk_pid75723
00:48:50.326 Removing: /var/run/dpdk/spdk_pid76514
00:48:50.326 Removing: /var/run/dpdk/spdk_pid76651
00:48:50.326 Removing: /var/run/dpdk/spdk_pid76834
00:48:50.326 Removing: /var/run/dpdk/spdk_pid76937
00:48:50.326 Removing: /var/run/dpdk/spdk_pid77291
00:48:50.326 Removing: /var/run/dpdk/spdk_pid77573
00:48:50.326 Removing: /var/run/dpdk/spdk_pid77913
00:48:50.326 Removing: /var/run/dpdk/spdk_pid78106
00:48:50.326 Removing: /var/run/dpdk/spdk_pid78250
00:48:50.326 Removing: /var/run/dpdk/spdk_pid78303
00:48:50.326 Removing: /var/run/dpdk/spdk_pid78452
00:48:50.326 Removing: /var/run/dpdk/spdk_pid78483
00:48:50.326 Removing: /var/run/dpdk/spdk_pid78536
00:48:50.326 Removing: /var/run/dpdk/spdk_pid78746
00:48:50.326 Removing: /var/run/dpdk/spdk_pid78966
00:48:50.326 Removing: /var/run/dpdk/spdk_pid79409
00:48:50.326 Removing: /var/run/dpdk/spdk_pid79891
00:48:50.326 Removing: /var/run/dpdk/spdk_pid80352
00:48:50.327 Removing: /var/run/dpdk/spdk_pid80904
00:48:50.327 Removing: /var/run/dpdk/spdk_pid81041
00:48:50.327 Removing: /var/run/dpdk/spdk_pid81129
00:48:50.327 Removing: /var/run/dpdk/spdk_pid81816
00:48:50.327 Removing: /var/run/dpdk/spdk_pid81888
00:48:50.327 Removing: /var/run/dpdk/spdk_pid82365
00:48:50.327 Removing: /var/run/dpdk/spdk_pid82788
00:48:50.608 Removing: /var/run/dpdk/spdk_pid83338
00:48:50.608 Removing: /var/run/dpdk/spdk_pid83449
00:48:50.608 Removing: /var/run/dpdk/spdk_pid83491
00:48:50.608 Removing: /var/run/dpdk/spdk_pid83561
00:48:50.608 Removing: /var/run/dpdk/spdk_pid83619
00:48:50.608 Removing: /var/run/dpdk/spdk_pid83682
00:48:50.608 Removing: /var/run/dpdk/spdk_pid83871
00:48:50.608 Removing: /var/run/dpdk/spdk_pid83950
00:48:50.608 Removing: /var/run/dpdk/spdk_pid84013
00:48:50.608 Removing: /var/run/dpdk/spdk_pid84069
00:48:50.608 Removing: /var/run/dpdk/spdk_pid84104
00:48:50.608 Removing: /var/run/dpdk/spdk_pid84170
00:48:50.608 Removing: /var/run/dpdk/spdk_pid84285
00:48:50.608 Clean
00:48:50.608 07:15:23 -- common/autotest_common.sh@1453 -- # return 0
00:48:50.608 07:15:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:48:50.608 07:15:23 -- common/autotest_common.sh@732 -- # xtrace_disable
00:48:50.608 07:15:23 -- common/autotest_common.sh@10 -- # set +x
00:48:50.608 07:15:23 -- spdk/autotest.sh@391 -- # timing_exit autotest
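The "Cleaning"/"Removing:" block above is autotest_cleanup clearing DPDK runtime state from /var/run/dpdk: the spdk0 directory holds the primary process's config and fbarray memory-map files, and each spdk_pid<N> file is per-process state left behind by an SPDK application launched during the run. A minimal shell sketch of that cleanup pattern, assuming only the file layout visible in the log (this is not the exact autotest_cleanup implementation):

    # Remove the primary-process runtime directory (config, fbarray_*, hugepage_info)
    sudo rm -rf /var/run/dpdk/spdk0
    # Remove leftover per-PID runtime files from SPDK processes started during the tests
    sudo rm -f /var/run/dpdk/spdk_pid*

Clearing this state matters because a later DPDK primary process can fail to initialize, or strand hugepages, if stale runtime files from a dead process are still present.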
00:48:50.608 07:15:23 -- common/autotest_common.sh@732 -- # xtrace_disable
00:48:50.608 07:15:23 -- common/autotest_common.sh@10 -- # set +x
00:48:50.608 07:15:23 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:48:50.608 07:15:23 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:48:50.608 07:15:23 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:48:50.608 07:15:23 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:48:50.608 07:15:23 -- spdk/autotest.sh@398 -- # hostname
00:48:50.608 07:15:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:48:50.877 geninfo: WARNING: invalid characters removed from testname!
00:49:17.422 07:15:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:49:17.422 07:15:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:49:19.954 07:15:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:49:21.859 07:15:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:49:24.388 07:15:56 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:49:27.676 07:15:59 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:49:29.580 07:16:01 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
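The lcov sequence above is a conventional capture/merge/filter flow: capture the post-test counters from the SPDK tree into cov_test.info, merge them with the cov_base.info baseline recorded before the tests ran, then repeatedly strip third-party and helper code from the combined report. A condensed sketch of the same flow (the long --rc option list from the log is omitted for brevity, and cov_base.info is assumed to have been captured earlier with matching options):

    cd /home/vagrant/spdk_repo
    # Capture execution counts after the tests; -t tags the data with a test name
    lcov -q -c --no-external -d spdk -t "$(hostname)" -o output/cov_test.info
    # Merge the pre-test baseline with the post-test capture
    lcov -q -a output/cov_base.info -a output/cov_test.info -o output/cov_total.info
    # Filter out code that should not count toward SPDK coverage
    lcov -q -r output/cov_total.info '*/dpdk/*' -o output/cov_total.info
    lcov -q -r output/cov_total.info '/usr/*' -o output/cov_total.info

The usual reason for merging a baseline capture with -a is to keep code that was compiled but never executed visible in the final report at 0% coverage, rather than missing entirely.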
00:49:29.580 07:16:01 -- spdk/autorun.sh@1 -- $ timing_finish
00:49:29.580 07:16:01 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:49:29.580 07:16:01 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:49:29.580 07:16:01 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:49:29.580 07:16:01 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:49:29.580 + [[ -n 5290 ]]
00:49:29.580 + sudo kill 5290
00:49:29.590 [Pipeline] }
00:49:29.607 [Pipeline] // timeout
00:49:29.612 [Pipeline] }
00:49:29.626 [Pipeline] // stage
00:49:29.631 [Pipeline] }
00:49:29.646 [Pipeline] // catchError
00:49:29.655 [Pipeline] stage
00:49:29.658 [Pipeline] { (Stop VM)
00:49:29.668 [Pipeline] sh
00:49:29.959 + vagrant halt
00:49:33.245 ==> default: Halting domain...
00:49:39.820 [Pipeline] sh
00:49:40.107 + vagrant destroy -f
00:49:42.645 ==> default: Removing domain...
00:49:42.913 [Pipeline] sh
00:49:43.185 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:49:43.194 [Pipeline] }
00:49:43.204 [Pipeline] // stage
00:49:43.208 [Pipeline] }
00:49:43.216 [Pipeline] // dir
00:49:43.220 [Pipeline] }
00:49:43.229 [Pipeline] // wrap
00:49:43.233 [Pipeline] }
00:49:43.240 [Pipeline] // catchError
00:49:43.247 [Pipeline] stage
00:49:43.248 [Pipeline] { (Epilogue)
00:49:43.258 [Pipeline] sh
00:49:43.536 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:49:48.855 [Pipeline] catchError
00:49:48.858 [Pipeline] {
00:49:48.873 [Pipeline] sh
00:49:49.154 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:49:49.414 Artifacts sizes are good
00:49:49.424 [Pipeline] }
00:49:49.441 [Pipeline] // catchError
00:49:49.452 [Pipeline] archiveArtifacts
00:49:49.459 Archiving artifacts
00:49:49.572 [Pipeline] cleanWs
00:49:49.585 [WS-CLEANUP] Deleting project workspace...
00:49:49.585 [WS-CLEANUP] Deferred wipeout is used...
00:49:49.591 [WS-CLEANUP] done
00:49:49.593 [Pipeline] }
00:49:49.610 [Pipeline] // stage
00:49:49.616 [Pipeline] }
00:49:49.630 [Pipeline] // node
00:49:49.635 [Pipeline] End of Pipeline
00:49:49.672 Finished: SUCCESS
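One step near the end of the log worth unpacking: timing_finish renders the timing.txt written during the run (folded "step;substep seconds" records produced by the timing_enter/timing_exit calls seen throughout the log) into an interactive SVG via the FlameGraph tool, with --nametype and --countname relabeling frames from stacks/samples to steps/seconds. A sketch of an equivalent standalone invocation; the log does not show where the SVG goes, so the redirection target below is an assumption:

    # Render per-step build/test timings as a flame graph SVG (output path assumed)
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds \
        /home/vagrant/spdk_repo/output/timing.txt > timing.svg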