00:00:00.001 Started by upstream project "autotest-per-patch" build number 132457 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.109 The recommended git tool is: git 00:00:00.109 using credential 00000000-0000-0000-0000-000000000002 00:00:00.111 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.164 Fetching changes from the remote Git repository 00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.222 Using shallow fetch with depth 1 00:00:00.222 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.222 > git --version # timeout=10 00:00:00.270 > git --version # 'git version 2.39.2' 00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.294 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.294 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.754 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.767 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.778 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.778 > git config core.sparsecheckout # timeout=10 00:00:06.790 > git read-tree -mu HEAD # timeout=10 00:00:06.804 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.825 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.825 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.909 [Pipeline] Start of Pipeline 00:00:06.922 [Pipeline] library 00:00:06.923 Loading library shm_lib@master 00:00:06.923 Library shm_lib@master is cached. Copying from home. 00:00:06.940 [Pipeline] node 00:00:06.962 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.964 [Pipeline] { 00:00:06.974 [Pipeline] catchError 00:00:06.976 [Pipeline] { 00:00:06.987 [Pipeline] wrap 00:00:06.995 [Pipeline] { 00:00:07.003 [Pipeline] stage 00:00:07.005 [Pipeline] { (Prologue) 00:00:07.023 [Pipeline] echo 00:00:07.025 Node: VM-host-WFP1 00:00:07.030 [Pipeline] cleanWs 00:00:07.040 [WS-CLEANUP] Deleting project workspace... 00:00:07.040 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.047 [WS-CLEANUP] done 00:00:07.242 [Pipeline] setCustomBuildProperty 00:00:07.312 [Pipeline] httpRequest 00:00:07.692 [Pipeline] echo 00:00:07.693 Sorcerer 10.211.164.20 is alive 00:00:07.700 [Pipeline] retry 00:00:07.701 [Pipeline] { 00:00:07.713 [Pipeline] httpRequest 00:00:07.717 HttpMethod: GET 00:00:07.717 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.718 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.731 Response Code: HTTP/1.1 200 OK 00:00:07.732 Success: Status code 200 is in the accepted range: 200,404 00:00:07.732 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.431 [Pipeline] } 00:00:13.449 [Pipeline] // retry 00:00:13.457 [Pipeline] sh 00:00:13.745 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.762 [Pipeline] httpRequest 00:00:14.377 [Pipeline] echo 00:00:14.379 Sorcerer 10.211.164.20 is alive 00:00:14.389 [Pipeline] retry 00:00:14.391 [Pipeline] { 00:00:14.444 [Pipeline] httpRequest 00:00:14.461 HttpMethod: GET 00:00:14.462 URL: http://10.211.164.20/packages/spdk_a6ed92877954e6f64e13266e9bf0a461c7c17f13.tar.gz 00:00:14.462 Sending request to url: http://10.211.164.20/packages/spdk_a6ed92877954e6f64e13266e9bf0a461c7c17f13.tar.gz 00:00:14.464 Response Code: HTTP/1.1 200 OK 00:00:14.466 Success: Status code 200 is in the accepted range: 200,404 00:00:14.467 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_a6ed92877954e6f64e13266e9bf0a461c7c17f13.tar.gz 00:01:55.541 [Pipeline] } 00:01:55.560 [Pipeline] // retry 00:01:55.567 [Pipeline] sh 00:01:55.851 + tar --no-same-owner -xf spdk_a6ed92877954e6f64e13266e9bf0a461c7c17f13.tar.gz 00:01:58.402 [Pipeline] sh 00:01:58.683 + git -C spdk log --oneline -n5 00:01:58.683 a6ed92877 scripts/perf: Include hidden path devices in queue setup 00:01:58.683 8bbc7b697 nvmf: Block ctrlr-only admin cmds if NSID is set 00:01:58.683 d66a1e46f test/nvme/interrupt: Verify pre|post IO cpu load 00:01:58.683 e0d7428b4 lvol: Add custom metadata page size to lvstore 00:01:58.683 2dc4a231a blob: Add support for variable metadata page size 00:01:58.716 [Pipeline] writeFile 00:01:58.767 [Pipeline] sh 00:01:59.051 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:59.064 [Pipeline] sh 00:01:59.345 + cat autorun-spdk.conf 00:01:59.345 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.345 SPDK_TEST_NVME=1 00:01:59.345 SPDK_TEST_FTL=1 00:01:59.345 SPDK_TEST_ISAL=1 00:01:59.345 SPDK_RUN_ASAN=1 00:01:59.345 SPDK_RUN_UBSAN=1 00:01:59.345 SPDK_TEST_XNVME=1 00:01:59.345 SPDK_TEST_NVME_FDP=1 00:01:59.345 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.352 RUN_NIGHTLY=0 00:01:59.355 [Pipeline] } 00:01:59.369 [Pipeline] // stage 00:01:59.384 [Pipeline] stage 00:01:59.386 [Pipeline] { (Run VM) 00:01:59.399 [Pipeline] sh 00:01:59.682 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:59.682 + echo 'Start stage prepare_nvme.sh' 00:01:59.682 Start stage prepare_nvme.sh 00:01:59.682 + [[ -n 3 ]] 00:01:59.682 + disk_prefix=ex3 00:01:59.682 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:59.682 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:59.682 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:59.682 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.682 ++ SPDK_TEST_NVME=1 00:01:59.682 ++ SPDK_TEST_FTL=1 00:01:59.682 ++ SPDK_TEST_ISAL=1 00:01:59.682 ++ 
SPDK_RUN_ASAN=1 00:01:59.682 ++ SPDK_RUN_UBSAN=1 00:01:59.682 ++ SPDK_TEST_XNVME=1 00:01:59.682 ++ SPDK_TEST_NVME_FDP=1 00:01:59.682 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.682 ++ RUN_NIGHTLY=0 00:01:59.682 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:59.682 + nvme_files=() 00:01:59.682 + declare -A nvme_files 00:01:59.682 + backend_dir=/var/lib/libvirt/images/backends 00:01:59.682 + nvme_files['nvme.img']=5G 00:01:59.682 + nvme_files['nvme-cmb.img']=5G 00:01:59.682 + nvme_files['nvme-multi0.img']=4G 00:01:59.682 + nvme_files['nvme-multi1.img']=4G 00:01:59.682 + nvme_files['nvme-multi2.img']=4G 00:01:59.682 + nvme_files['nvme-openstack.img']=8G 00:01:59.682 + nvme_files['nvme-zns.img']=5G 00:01:59.682 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:59.682 + (( SPDK_TEST_FTL == 1 )) 00:01:59.682 + nvme_files["nvme-ftl.img"]=6G 00:01:59.682 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:59.682 + nvme_files["nvme-fdp.img"]=1G 00:01:59.682 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:59.682 + for nvme in "${!nvme_files[@]}" 00:01:59.682 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:59.682 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.682 + for nvme in "${!nvme_files[@]}" 00:01:59.682 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:01:59.682 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:59.682 + for nvme in "${!nvme_files[@]}" 00:01:59.682 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:59.941 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.941 + for nvme in "${!nvme_files[@]}" 00:01:59.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:59.941 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:59.941 + for nvme in "${!nvme_files[@]}" 00:01:59.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:59.941 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.941 + for nvme in "${!nvme_files[@]}" 00:01:59.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:59.941 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.941 + for nvme in "${!nvme_files[@]}" 00:01:59.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:59.942 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.942 + for nvme in "${!nvme_files[@]}" 00:01:59.942 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:02:00.201 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:00.201 + for nvme in "${!nvme_files[@]}" 00:02:00.201 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:02:00.201 Formatting 
'/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.201 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:02:00.201 + echo 'End stage prepare_nvme.sh' 00:02:00.201 End stage prepare_nvme.sh 00:02:00.213 [Pipeline] sh 00:02:00.497 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:00.497 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:00.497 00:02:00.497 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:00.497 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:00.497 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:00.497 HELP=0 00:02:00.497 DRY_RUN=0 00:02:00.497 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:02:00.497 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:00.497 NVME_AUTO_CREATE=0 00:02:00.497 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:02:00.497 NVME_CMB=,,,, 00:02:00.497 NVME_PMR=,,,, 00:02:00.497 NVME_ZNS=,,,, 00:02:00.497 NVME_MS=true,,,, 00:02:00.497 NVME_FDP=,,,on, 00:02:00.497 SPDK_VAGRANT_DISTRO=fedora39 00:02:00.497 SPDK_VAGRANT_VMCPU=10 00:02:00.497 SPDK_VAGRANT_VMRAM=12288 00:02:00.497 SPDK_VAGRANT_PROVIDER=libvirt 00:02:00.497 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:00.497 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:00.497 SPDK_OPENSTACK_NETWORK=0 00:02:00.497 VAGRANT_PACKAGE_BOX=0 00:02:00.497 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:00.497 FORCE_DISTRO=true 00:02:00.497 VAGRANT_BOX_VERSION= 00:02:00.497 EXTRA_VAGRANTFILES= 00:02:00.497 NIC_MODEL=e1000 00:02:00.497 00:02:00.497 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:00.497 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:03.035 Bringing machine 'default' up with 'libvirt' provider... 00:02:03.973 ==> default: Creating image (snapshot of base box volume). 00:02:04.233 ==> default: Creating domain with the following settings... 
00:02:04.233 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732263758_2e09ba84fad67e88b19e 00:02:04.233 ==> default: -- Domain type: kvm 00:02:04.233 ==> default: -- Cpus: 10 00:02:04.233 ==> default: -- Feature: acpi 00:02:04.233 ==> default: -- Feature: apic 00:02:04.233 ==> default: -- Feature: pae 00:02:04.233 ==> default: -- Memory: 12288M 00:02:04.233 ==> default: -- Memory Backing: hugepages: 00:02:04.233 ==> default: -- Management MAC: 00:02:04.233 ==> default: -- Loader: 00:02:04.233 ==> default: -- Nvram: 00:02:04.233 ==> default: -- Base box: spdk/fedora39 00:02:04.233 ==> default: -- Storage pool: default 00:02:04.233 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732263758_2e09ba84fad67e88b19e.img (20G) 00:02:04.233 ==> default: -- Volume Cache: default 00:02:04.233 ==> default: -- Kernel: 00:02:04.233 ==> default: -- Initrd: 00:02:04.233 ==> default: -- Graphics Type: vnc 00:02:04.233 ==> default: -- Graphics Port: -1 00:02:04.233 ==> default: -- Graphics IP: 127.0.0.1 00:02:04.233 ==> default: -- Graphics Password: Not defined 00:02:04.233 ==> default: -- Video Type: cirrus 00:02:04.233 ==> default: -- Video VRAM: 9216 00:02:04.233 ==> default: -- Sound Type: 00:02:04.233 ==> default: -- Keymap: en-us 00:02:04.233 ==> default: -- TPM Path: 00:02:04.233 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:04.233 ==> default: -- Command line args: 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:04.233 ==> default: -> value=-drive, 00:02:04.233 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:04.233 ==> default: -> value=-drive, 00:02:04.233 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:04.233 ==> default: -> value=-drive, 00:02:04.233 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.233 ==> default: -> value=-drive, 00:02:04.233 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.233 ==> default: -> value=-drive, 00:02:04.233 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:04.233 ==> default: -> value=-drive, 00:02:04.233 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:04.233 ==> default: -> value=-device, 00:02:04.233 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.802 ==> default: Creating shared folders metadata... 00:02:04.802 ==> default: Starting domain. 00:02:06.711 ==> default: Waiting for domain to get an IP address... 00:02:24.884 ==> default: Waiting for SSH to become available... 00:02:24.884 ==> default: Configuring and enabling network interfaces... 00:02:29.075 default: SSH address: 192.168.121.58:22 00:02:29.075 default: SSH username: vagrant 00:02:29.075 default: SSH auth method: private key 00:02:31.722 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:41.726 ==> default: Mounting SSHFS shared folder... 00:02:43.102 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:43.102 ==> default: Checking Mount.. 00:02:44.482 ==> default: Folder Successfully Mounted! 00:02:44.482 ==> default: Running provisioner: file... 00:02:45.862 default: ~/.gitconfig => .gitconfig 00:02:46.122 00:02:46.122 SUCCESS! 00:02:46.122 00:02:46.122 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:46.122 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:46.122 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:46.122 00:02:46.132 [Pipeline] } 00:02:46.148 [Pipeline] // stage 00:02:46.158 [Pipeline] dir 00:02:46.158 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:46.160 [Pipeline] { 00:02:46.173 [Pipeline] catchError 00:02:46.175 [Pipeline] { 00:02:46.187 [Pipeline] sh 00:02:46.471 + vagrant ssh-config --host vagrant 00:02:46.471 + sed -ne /^Host/,$p 00:02:46.471 + tee ssh_conf 00:02:49.011 Host vagrant 00:02:49.011 HostName 192.168.121.58 00:02:49.011 User vagrant 00:02:49.011 Port 22 00:02:49.011 UserKnownHostsFile /dev/null 00:02:49.011 StrictHostKeyChecking no 00:02:49.011 PasswordAuthentication no 00:02:49.011 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:49.011 IdentitiesOnly yes 00:02:49.011 LogLevel FATAL 00:02:49.011 ForwardAgent yes 00:02:49.011 ForwardX11 yes 00:02:49.011 00:02:49.046 [Pipeline] withEnv 00:02:49.049 [Pipeline] { 00:02:49.063 [Pipeline] sh 00:02:49.371 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:49.371 source /etc/os-release 00:02:49.371 [[ -e /image.version ]] && img=$(< /image.version) 00:02:49.371 # Minimal, systemd-like check. 
00:02:49.371 if [[ -e /.dockerenv ]]; then 00:02:49.371 # Clear garbage from the node's name: 00:02:49.371 # agt-er_autotest_547-896 -> autotest_547-896 00:02:49.371 # $HOSTNAME is the actual container id 00:02:49.371 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:49.371 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:49.371 # We can assume this is a mount from a host where container is running, 00:02:49.371 # so fetch its hostname to easily identify the target swarm worker. 00:02:49.371 container="$(< /etc/hostname) ($agent)" 00:02:49.371 else 00:02:49.371 # Fallback 00:02:49.371 container=$agent 00:02:49.371 fi 00:02:49.371 fi 00:02:49.371 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:49.371 00:02:49.643 [Pipeline] } 00:02:49.658 [Pipeline] // withEnv 00:02:49.666 [Pipeline] setCustomBuildProperty 00:02:49.680 [Pipeline] stage 00:02:49.682 [Pipeline] { (Tests) 00:02:49.699 [Pipeline] sh 00:02:49.982 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:50.256 [Pipeline] sh 00:02:50.538 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:50.811 [Pipeline] timeout 00:02:50.811 Timeout set to expire in 50 min 00:02:50.813 [Pipeline] { 00:02:50.827 [Pipeline] sh 00:02:51.108 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:51.676 HEAD is now at a6ed92877 scripts/perf: Include hidden path devices in queue setup 00:02:51.689 [Pipeline] sh 00:02:51.972 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:52.244 [Pipeline] sh 00:02:52.525 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:52.801 [Pipeline] sh 00:02:53.084 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:53.343 ++ readlink -f spdk_repo 00:02:53.343 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:53.343 + [[ -n /home/vagrant/spdk_repo ]] 00:02:53.343 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:53.343 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:53.343 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:53.343 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:53.343 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:53.343 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:53.343 + cd /home/vagrant/spdk_repo 00:02:53.343 + source /etc/os-release 00:02:53.343 ++ NAME='Fedora Linux' 00:02:53.343 ++ VERSION='39 (Cloud Edition)' 00:02:53.343 ++ ID=fedora 00:02:53.343 ++ VERSION_ID=39 00:02:53.343 ++ VERSION_CODENAME= 00:02:53.343 ++ PLATFORM_ID=platform:f39 00:02:53.343 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:53.343 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:53.343 ++ LOGO=fedora-logo-icon 00:02:53.343 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:53.343 ++ HOME_URL=https://fedoraproject.org/ 00:02:53.343 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:53.343 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:53.343 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:53.343 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:53.343 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:53.343 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:53.343 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:53.343 ++ SUPPORT_END=2024-11-12 00:02:53.343 ++ VARIANT='Cloud Edition' 00:02:53.343 ++ VARIANT_ID=cloud 00:02:53.343 + uname -a 00:02:53.343 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:53.343 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:53.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:54.171 Hugepages 00:02:54.171 node hugesize free / total 00:02:54.171 node0 1048576kB 0 / 0 00:02:54.171 node0 2048kB 0 / 0 00:02:54.171 00:02:54.171 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:54.171 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:54.171 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:54.171 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:54.171 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:54.171 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:54.171 + rm -f /tmp/spdk-ld-path 00:02:54.171 + source autorun-spdk.conf 00:02:54.171 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:54.171 ++ SPDK_TEST_NVME=1 00:02:54.171 ++ SPDK_TEST_FTL=1 00:02:54.171 ++ SPDK_TEST_ISAL=1 00:02:54.171 ++ SPDK_RUN_ASAN=1 00:02:54.171 ++ SPDK_RUN_UBSAN=1 00:02:54.171 ++ SPDK_TEST_XNVME=1 00:02:54.171 ++ SPDK_TEST_NVME_FDP=1 00:02:54.171 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:54.171 ++ RUN_NIGHTLY=0 00:02:54.171 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:54.171 + [[ -n '' ]] 00:02:54.171 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:54.430 + for M in /var/spdk/build-*-manifest.txt 00:02:54.430 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:54.430 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.430 + for M in /var/spdk/build-*-manifest.txt 00:02:54.430 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:54.430 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.430 + for M in /var/spdk/build-*-manifest.txt 00:02:54.430 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:54.430 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.430 ++ uname 00:02:54.430 + [[ Linux == \L\i\n\u\x ]] 00:02:54.430 + sudo dmesg -T 00:02:54.430 + sudo dmesg --clear 00:02:54.430 + dmesg_pid=5259 00:02:54.430 
+ [[ Fedora Linux == FreeBSD ]] 00:02:54.430 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:54.430 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:54.430 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:54.430 + [[ -x /usr/src/fio-static/fio ]] 00:02:54.430 + sudo dmesg -Tw 00:02:54.430 + export FIO_BIN=/usr/src/fio-static/fio 00:02:54.430 + FIO_BIN=/usr/src/fio-static/fio 00:02:54.430 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:54.430 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:54.430 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:54.430 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:54.430 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:54.430 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:54.430 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:54.430 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:54.430 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:54.690 08:23:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:54.690 08:23:29 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:54.690 08:23:29 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:54.690 08:23:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:54.690 08:23:29 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:54.690 08:23:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:54.690 08:23:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:54.690 08:23:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:54.690 08:23:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:54.690 08:23:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:54.690 08:23:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:54.690 08:23:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.690 08:23:29 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.690 08:23:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.690 08:23:29 -- paths/export.sh@5 -- $ export PATH 00:02:54.690 08:23:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.690 08:23:29 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:54.690 08:23:29 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:54.690 08:23:29 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732263809.XXXXXX 00:02:54.690 08:23:29 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732263809.7iKlt8 00:02:54.690 08:23:29 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:54.690 08:23:29 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:54.690 08:23:29 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:54.690 08:23:29 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:54.690 08:23:29 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:54.690 08:23:29 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:54.690 08:23:29 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:54.690 08:23:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.690 08:23:29 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:54.690 08:23:29 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:54.690 08:23:29 -- pm/common@17 -- $ local monitor 00:02:54.690 08:23:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.690 08:23:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.690 08:23:29 -- pm/common@25 -- $ sleep 1 00:02:54.690 08:23:29 -- pm/common@21 -- $ date +%s 00:02:54.690 08:23:29 -- pm/common@21 -- $ date +%s 00:02:54.690 08:23:29 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732263809 00:02:54.690 08:23:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732263809 00:02:54.690 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732263809_collect-cpu-load.pm.log 00:02:54.690 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732263809_collect-vmstat.pm.log 00:02:55.627 08:23:30 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:55.627 08:23:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:55.627 08:23:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:55.627 08:23:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:55.627 08:23:30 -- spdk/autobuild.sh@16 -- $ date -u 00:02:55.627 Fri Nov 22 08:23:30 AM UTC 2024 00:02:55.627 08:23:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:55.627 v25.01-pre-234-ga6ed92877 00:02:55.627 08:23:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:55.627 08:23:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:55.627 08:23:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:55.627 08:23:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:55.627 08:23:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.886 ************************************ 00:02:55.886 START TEST asan 00:02:55.886 ************************************ 00:02:55.886 using asan 00:02:55.886 08:23:30 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:55.886 00:02:55.886 real 0m0.000s 00:02:55.886 user 0m0.000s 00:02:55.886 sys 0m0.000s 00:02:55.886 08:23:30 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:55.886 ************************************ 00:02:55.886 END TEST asan 00:02:55.886 08:23:30 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:55.886 ************************************ 00:02:55.886 08:23:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:55.886 08:23:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:55.886 08:23:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:55.886 08:23:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:55.886 08:23:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.886 ************************************ 00:02:55.886 START TEST ubsan 00:02:55.886 ************************************ 00:02:55.886 using ubsan 00:02:55.886 08:23:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:55.886 00:02:55.886 real 0m0.000s 00:02:55.886 user 0m0.000s 00:02:55.886 sys 0m0.000s 00:02:55.886 08:23:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:55.886 08:23:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:55.886 ************************************ 00:02:55.886 END TEST ubsan 00:02:55.886 ************************************ 00:02:55.886 08:23:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:55.886 08:23:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:55.886 08:23:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:55.886 08:23:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:55.886 08:23:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:55.886 08:23:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:55.886 08:23:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:55.886 08:23:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:55.886 08:23:30 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:56.145 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:56.145 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:56.404 Using 'verbs' RDMA provider 00:03:12.673 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:30.772 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:30.772 Creating mk/config.mk...done. 00:03:30.772 Creating mk/cc.flags.mk...done. 00:03:30.772 Type 'make' to build. 00:03:30.772 08:24:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:30.772 08:24:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:30.772 08:24:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:30.772 08:24:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:30.772 ************************************ 00:03:30.772 START TEST make 00:03:30.772 ************************************ 00:03:30.772 08:24:03 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:30.772 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:30.772 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:30.772 meson setup builddir \ 00:03:30.772 -Dwith-libaio=enabled \ 00:03:30.772 -Dwith-liburing=enabled \ 00:03:30.772 -Dwith-libvfn=disabled \ 00:03:30.772 -Dwith-spdk=disabled \ 00:03:30.772 -Dexamples=false \ 00:03:30.772 -Dtests=false \ 00:03:30.772 -Dtools=false && \ 00:03:30.772 meson compile -C builddir && \ 00:03:30.772 cd -) 00:03:30.772 make[1]: Nothing to be done for 'all'. 
00:03:31.031 The Meson build system 00:03:31.031 Version: 1.5.0 00:03:31.031 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:31.031 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:31.031 Build type: native build 00:03:31.031 Project name: xnvme 00:03:31.031 Project version: 0.7.5 00:03:31.031 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:31.031 C linker for the host machine: cc ld.bfd 2.40-14 00:03:31.031 Host machine cpu family: x86_64 00:03:31.031 Host machine cpu: x86_64 00:03:31.031 Message: host_machine.system: linux 00:03:31.031 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:31.031 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:31.031 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:31.031 Run-time dependency threads found: YES 00:03:31.031 Has header "setupapi.h" : NO 00:03:31.031 Has header "linux/blkzoned.h" : YES 00:03:31.031 Has header "linux/blkzoned.h" : YES (cached) 00:03:31.031 Has header "libaio.h" : YES 00:03:31.031 Library aio found: YES 00:03:31.031 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:31.031 Run-time dependency liburing found: YES 2.2 00:03:31.031 Dependency libvfn skipped: feature with-libvfn disabled 00:03:31.031 Found CMake: /usr/bin/cmake (3.27.7) 00:03:31.031 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:31.031 Subproject spdk : skipped: feature with-spdk disabled 00:03:31.031 Run-time dependency appleframeworks found: NO (tried framework) 00:03:31.031 Run-time dependency appleframeworks found: NO (tried framework) 00:03:31.031 Library rt found: YES 00:03:31.031 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:31.031 Configuring xnvme_config.h using configuration 00:03:31.031 Configuring xnvme.spec using configuration 00:03:31.031 Run-time dependency bash-completion found: YES 2.11 00:03:31.031 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:31.031 Program cp found: YES (/usr/bin/cp) 00:03:31.031 Build targets in project: 3 00:03:31.031 00:03:31.031 xnvme 0.7.5 00:03:31.031 00:03:31.031 Subprojects 00:03:31.031 spdk : NO Feature 'with-spdk' disabled 00:03:31.031 00:03:31.031 User defined options 00:03:31.031 examples : false 00:03:31.031 tests : false 00:03:31.031 tools : false 00:03:31.031 with-libaio : enabled 00:03:31.031 with-liburing: enabled 00:03:31.031 with-libvfn : disabled 00:03:31.031 with-spdk : disabled 00:03:31.031 00:03:31.031 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:31.598 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:31.598 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:31.598 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:31.598 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:31.598 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:31.598 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:31.598 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:31.598 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:31.598 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:31.598 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:31.598 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:31.598 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:31.598 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:31.856 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:31.856 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:31.856 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:31.856 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:31.856 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:31.856 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:31.856 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:31.856 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:31.856 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:31.856 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:31.856 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:31.856 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:31.856 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:31.856 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:31.856 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:31.857 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:31.857 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:31.857 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:31.857 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:31.857 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:31.857 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:31.857 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:31.857 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:31.857 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:31.857 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:31.857 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:31.857 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:31.857 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:31.857 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:31.857 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:31.857 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:31.857 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:32.115 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:32.115 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:32.115 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:32.115 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:32.115 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:32.115 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:32.115 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:32.115 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:32.115 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:32.115 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:32.115 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:32.115 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:32.115 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:32.115 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:32.115 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:32.115 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:32.115 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:32.115 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:32.115 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:32.115 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:32.115 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:32.115 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:32.115 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:32.372 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:32.372 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:32.372 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:32.372 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:32.372 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:32.372 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:32.629 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:32.629 [75/76] Linking static target lib/libxnvme.a 00:03:32.629 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:32.629 INFO: autodetecting backend as ninja 00:03:32.629 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:32.887 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:40.989 The Meson build system 00:03:40.990 Version: 1.5.0 00:03:40.990 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:40.990 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:40.990 Build type: native build 00:03:40.990 Program cat found: YES (/usr/bin/cat) 00:03:40.990 Project name: DPDK 00:03:40.990 Project version: 24.03.0 00:03:40.990 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:40.990 C linker for the host machine: cc ld.bfd 2.40-14 00:03:40.990 Host machine cpu family: x86_64 00:03:40.990 Host machine cpu: x86_64 00:03:40.990 Message: ## Building in Developer Mode ## 00:03:40.990 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:40.990 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:40.990 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:40.990 Program python3 found: YES (/usr/bin/python3) 00:03:40.990 Program cat found: YES (/usr/bin/cat) 00:03:40.990 Compiler for C supports arguments -march=native: YES 00:03:40.990 Checking for size of "void *" : 8 00:03:40.990 Checking for size of "void *" : 8 (cached) 00:03:40.990 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:40.990 Library m found: YES 00:03:40.990 Library numa found: YES 00:03:40.990 Has header "numaif.h" : YES 00:03:40.990 Library fdt found: NO 00:03:40.990 Library execinfo found: NO 00:03:40.990 Has header "execinfo.h" : YES 00:03:40.990 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:40.990 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:40.990 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:40.990 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:40.990 Run-time dependency openssl found: YES 3.1.1 00:03:40.990 Run-time dependency libpcap found: YES 1.10.4 00:03:40.990 Has header "pcap.h" with dependency libpcap: YES 00:03:40.990 Compiler for C supports arguments -Wcast-qual: YES 00:03:40.990 Compiler for C supports arguments -Wdeprecated: YES 00:03:40.990 Compiler for C supports arguments -Wformat: YES 00:03:40.990 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:40.990 Compiler for C supports arguments -Wformat-security: NO 00:03:40.990 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:40.990 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:40.990 Compiler for C supports arguments -Wnested-externs: YES 00:03:40.990 Compiler for C supports arguments -Wold-style-definition: YES 00:03:40.990 Compiler for C supports arguments -Wpointer-arith: YES 00:03:40.990 Compiler for C supports arguments -Wsign-compare: YES 00:03:40.990 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:40.990 Compiler for C supports arguments -Wundef: YES 00:03:40.990 Compiler for C supports arguments -Wwrite-strings: YES 00:03:40.990 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:40.990 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:40.990 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:40.990 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:40.990 Program objdump found: YES (/usr/bin/objdump) 00:03:40.990 Compiler for C supports arguments -mavx512f: YES 00:03:40.990 Checking if "AVX512 checking" compiles: YES 00:03:40.990 Fetching value of define "__SSE4_2__" : 1 00:03:40.990 Fetching value of define "__AES__" : 1 00:03:40.990 Fetching value of define "__AVX__" : 1 00:03:40.990 Fetching value of define "__AVX2__" : 1 00:03:40.990 Fetching value of define "__AVX512BW__" : 1 00:03:40.990 Fetching value of define "__AVX512CD__" : 1 00:03:40.990 Fetching value of define "__AVX512DQ__" : 1 00:03:40.990 Fetching value of define "__AVX512F__" : 1 00:03:40.990 Fetching value of define "__AVX512VL__" : 1 00:03:40.990 Fetching value of define "__PCLMUL__" : 1 00:03:40.990 Fetching value of define "__RDRND__" : 1 00:03:40.990 Fetching value of define "__RDSEED__" : 1 00:03:40.990 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:40.990 Fetching value of define "__znver1__" : (undefined) 00:03:40.990 Fetching value of define "__znver2__" : (undefined) 00:03:40.990 Fetching value of define "__znver3__" : (undefined) 00:03:40.990 Fetching value of define "__znver4__" : (undefined) 00:03:40.990 Library asan found: YES 00:03:40.990 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:40.990 Message: lib/log: Defining dependency "log" 00:03:40.990 Message: lib/kvargs: Defining dependency "kvargs" 00:03:40.990 Message: lib/telemetry: Defining dependency "telemetry" 00:03:40.990 Library rt found: YES 00:03:40.990 Checking for function "getentropy" : NO 00:03:40.990 
Message: lib/eal: Defining dependency "eal" 00:03:40.990 Message: lib/ring: Defining dependency "ring" 00:03:40.990 Message: lib/rcu: Defining dependency "rcu" 00:03:40.990 Message: lib/mempool: Defining dependency "mempool" 00:03:40.990 Message: lib/mbuf: Defining dependency "mbuf" 00:03:40.990 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:40.990 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:40.990 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:40.990 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:40.990 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:40.990 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:40.990 Compiler for C supports arguments -mpclmul: YES 00:03:40.990 Compiler for C supports arguments -maes: YES 00:03:40.990 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:40.990 Compiler for C supports arguments -mavx512bw: YES 00:03:40.990 Compiler for C supports arguments -mavx512dq: YES 00:03:40.990 Compiler for C supports arguments -mavx512vl: YES 00:03:40.990 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:40.990 Compiler for C supports arguments -mavx2: YES 00:03:40.990 Compiler for C supports arguments -mavx: YES 00:03:40.990 Message: lib/net: Defining dependency "net" 00:03:40.990 Message: lib/meter: Defining dependency "meter" 00:03:40.990 Message: lib/ethdev: Defining dependency "ethdev" 00:03:40.990 Message: lib/pci: Defining dependency "pci" 00:03:40.990 Message: lib/cmdline: Defining dependency "cmdline" 00:03:40.990 Message: lib/hash: Defining dependency "hash" 00:03:40.990 Message: lib/timer: Defining dependency "timer" 00:03:40.990 Message: lib/compressdev: Defining dependency "compressdev" 00:03:40.990 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:40.990 Message: lib/dmadev: Defining dependency "dmadev" 00:03:40.990 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:40.990 Message: lib/power: Defining dependency "power" 00:03:40.990 Message: lib/reorder: Defining dependency "reorder" 00:03:40.990 Message: lib/security: Defining dependency "security" 00:03:40.990 Has header "linux/userfaultfd.h" : YES 00:03:40.990 Has header "linux/vduse.h" : YES 00:03:40.990 Message: lib/vhost: Defining dependency "vhost" 00:03:40.990 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:40.990 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:40.990 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:40.990 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:40.990 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:40.990 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:40.990 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:40.990 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:40.990 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:40.990 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:40.990 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:40.990 Configuring doxy-api-html.conf using configuration 00:03:40.990 Configuring doxy-api-man.conf using configuration 00:03:40.990 Program mandb found: YES (/usr/bin/mandb) 00:03:40.990 Program sphinx-build found: NO 00:03:40.990 Configuring rte_build_config.h using configuration 00:03:40.990 Message: 00:03:40.990 ================= 00:03:40.990 Applications 
Enabled 00:03:40.990 ================= 00:03:40.990 00:03:40.990 apps: 00:03:40.990 00:03:40.990 00:03:40.990 Message: 00:03:40.990 ================= 00:03:40.990 Libraries Enabled 00:03:40.990 ================= 00:03:40.990 00:03:40.990 libs: 00:03:40.990 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:40.990 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:40.990 cryptodev, dmadev, power, reorder, security, vhost, 00:03:40.990 00:03:40.990 Message: 00:03:40.990 =============== 00:03:40.990 Drivers Enabled 00:03:40.990 =============== 00:03:40.990 00:03:40.990 common: 00:03:40.990 00:03:40.990 bus: 00:03:40.990 pci, vdev, 00:03:40.990 mempool: 00:03:40.990 ring, 00:03:40.990 dma: 00:03:40.990 00:03:40.990 net: 00:03:40.990 00:03:40.990 crypto: 00:03:40.990 00:03:40.990 compress: 00:03:40.990 00:03:40.990 vdpa: 00:03:40.990 00:03:40.990 00:03:40.990 Message: 00:03:40.990 ================= 00:03:40.990 Content Skipped 00:03:40.990 ================= 00:03:40.990 00:03:40.990 apps: 00:03:40.990 dumpcap: explicitly disabled via build config 00:03:40.990 graph: explicitly disabled via build config 00:03:40.990 pdump: explicitly disabled via build config 00:03:40.990 proc-info: explicitly disabled via build config 00:03:40.990 test-acl: explicitly disabled via build config 00:03:40.990 test-bbdev: explicitly disabled via build config 00:03:40.990 test-cmdline: explicitly disabled via build config 00:03:40.990 test-compress-perf: explicitly disabled via build config 00:03:40.990 test-crypto-perf: explicitly disabled via build config 00:03:40.990 test-dma-perf: explicitly disabled via build config 00:03:40.990 test-eventdev: explicitly disabled via build config 00:03:40.990 test-fib: explicitly disabled via build config 00:03:40.990 test-flow-perf: explicitly disabled via build config 00:03:40.990 test-gpudev: explicitly disabled via build config 00:03:40.990 test-mldev: explicitly disabled via build config 00:03:40.990 test-pipeline: explicitly disabled via build config 00:03:40.990 test-pmd: explicitly disabled via build config 00:03:40.990 test-regex: explicitly disabled via build config 00:03:40.990 test-sad: explicitly disabled via build config 00:03:40.990 test-security-perf: explicitly disabled via build config 00:03:40.990 00:03:40.991 libs: 00:03:40.991 argparse: explicitly disabled via build config 00:03:40.991 metrics: explicitly disabled via build config 00:03:40.991 acl: explicitly disabled via build config 00:03:40.991 bbdev: explicitly disabled via build config 00:03:40.991 bitratestats: explicitly disabled via build config 00:03:40.991 bpf: explicitly disabled via build config 00:03:40.991 cfgfile: explicitly disabled via build config 00:03:40.991 distributor: explicitly disabled via build config 00:03:40.991 efd: explicitly disabled via build config 00:03:40.991 eventdev: explicitly disabled via build config 00:03:40.991 dispatcher: explicitly disabled via build config 00:03:40.991 gpudev: explicitly disabled via build config 00:03:40.991 gro: explicitly disabled via build config 00:03:40.991 gso: explicitly disabled via build config 00:03:40.991 ip_frag: explicitly disabled via build config 00:03:40.991 jobstats: explicitly disabled via build config 00:03:40.991 latencystats: explicitly disabled via build config 00:03:40.991 lpm: explicitly disabled via build config 00:03:40.991 member: explicitly disabled via build config 00:03:40.991 pcapng: explicitly disabled via build config 00:03:40.991 rawdev: explicitly disabled via build config 00:03:40.991 
regexdev: explicitly disabled via build config 00:03:40.991 mldev: explicitly disabled via build config 00:03:40.991 rib: explicitly disabled via build config 00:03:40.991 sched: explicitly disabled via build config 00:03:40.991 stack: explicitly disabled via build config 00:03:40.991 ipsec: explicitly disabled via build config 00:03:40.991 pdcp: explicitly disabled via build config 00:03:40.991 fib: explicitly disabled via build config 00:03:40.991 port: explicitly disabled via build config 00:03:40.991 pdump: explicitly disabled via build config 00:03:40.991 table: explicitly disabled via build config 00:03:40.991 pipeline: explicitly disabled via build config 00:03:40.991 graph: explicitly disabled via build config 00:03:40.991 node: explicitly disabled via build config 00:03:40.991 00:03:40.991 drivers: 00:03:40.991 common/cpt: not in enabled drivers build config 00:03:40.991 common/dpaax: not in enabled drivers build config 00:03:40.991 common/iavf: not in enabled drivers build config 00:03:40.991 common/idpf: not in enabled drivers build config 00:03:40.991 common/ionic: not in enabled drivers build config 00:03:40.991 common/mvep: not in enabled drivers build config 00:03:40.991 common/octeontx: not in enabled drivers build config 00:03:40.991 bus/auxiliary: not in enabled drivers build config 00:03:40.991 bus/cdx: not in enabled drivers build config 00:03:40.991 bus/dpaa: not in enabled drivers build config 00:03:40.991 bus/fslmc: not in enabled drivers build config 00:03:40.991 bus/ifpga: not in enabled drivers build config 00:03:40.991 bus/platform: not in enabled drivers build config 00:03:40.991 bus/uacce: not in enabled drivers build config 00:03:40.991 bus/vmbus: not in enabled drivers build config 00:03:40.991 common/cnxk: not in enabled drivers build config 00:03:40.991 common/mlx5: not in enabled drivers build config 00:03:40.991 common/nfp: not in enabled drivers build config 00:03:40.991 common/nitrox: not in enabled drivers build config 00:03:40.991 common/qat: not in enabled drivers build config 00:03:40.991 common/sfc_efx: not in enabled drivers build config 00:03:40.991 mempool/bucket: not in enabled drivers build config 00:03:40.991 mempool/cnxk: not in enabled drivers build config 00:03:40.991 mempool/dpaa: not in enabled drivers build config 00:03:40.991 mempool/dpaa2: not in enabled drivers build config 00:03:40.991 mempool/octeontx: not in enabled drivers build config 00:03:40.991 mempool/stack: not in enabled drivers build config 00:03:40.991 dma/cnxk: not in enabled drivers build config 00:03:40.991 dma/dpaa: not in enabled drivers build config 00:03:40.991 dma/dpaa2: not in enabled drivers build config 00:03:40.991 dma/hisilicon: not in enabled drivers build config 00:03:40.991 dma/idxd: not in enabled drivers build config 00:03:40.991 dma/ioat: not in enabled drivers build config 00:03:40.991 dma/skeleton: not in enabled drivers build config 00:03:40.991 net/af_packet: not in enabled drivers build config 00:03:40.991 net/af_xdp: not in enabled drivers build config 00:03:40.991 net/ark: not in enabled drivers build config 00:03:40.991 net/atlantic: not in enabled drivers build config 00:03:40.991 net/avp: not in enabled drivers build config 00:03:40.991 net/axgbe: not in enabled drivers build config 00:03:40.991 net/bnx2x: not in enabled drivers build config 00:03:40.991 net/bnxt: not in enabled drivers build config 00:03:40.991 net/bonding: not in enabled drivers build config 00:03:40.991 net/cnxk: not in enabled drivers build config 00:03:40.991 net/cpfl: 
not in enabled drivers build config 00:03:40.991 net/cxgbe: not in enabled drivers build config 00:03:40.991 net/dpaa: not in enabled drivers build config 00:03:40.991 net/dpaa2: not in enabled drivers build config 00:03:40.991 net/e1000: not in enabled drivers build config 00:03:40.991 net/ena: not in enabled drivers build config 00:03:40.991 net/enetc: not in enabled drivers build config 00:03:40.991 net/enetfec: not in enabled drivers build config 00:03:40.991 net/enic: not in enabled drivers build config 00:03:40.991 net/failsafe: not in enabled drivers build config 00:03:40.991 net/fm10k: not in enabled drivers build config 00:03:40.991 net/gve: not in enabled drivers build config 00:03:40.991 net/hinic: not in enabled drivers build config 00:03:40.991 net/hns3: not in enabled drivers build config 00:03:40.991 net/i40e: not in enabled drivers build config 00:03:40.991 net/iavf: not in enabled drivers build config 00:03:40.991 net/ice: not in enabled drivers build config 00:03:40.991 net/idpf: not in enabled drivers build config 00:03:40.991 net/igc: not in enabled drivers build config 00:03:40.991 net/ionic: not in enabled drivers build config 00:03:40.991 net/ipn3ke: not in enabled drivers build config 00:03:40.991 net/ixgbe: not in enabled drivers build config 00:03:40.991 net/mana: not in enabled drivers build config 00:03:40.991 net/memif: not in enabled drivers build config 00:03:40.991 net/mlx4: not in enabled drivers build config 00:03:40.991 net/mlx5: not in enabled drivers build config 00:03:40.991 net/mvneta: not in enabled drivers build config 00:03:40.991 net/mvpp2: not in enabled drivers build config 00:03:40.991 net/netvsc: not in enabled drivers build config 00:03:40.991 net/nfb: not in enabled drivers build config 00:03:40.991 net/nfp: not in enabled drivers build config 00:03:40.991 net/ngbe: not in enabled drivers build config 00:03:40.991 net/null: not in enabled drivers build config 00:03:40.991 net/octeontx: not in enabled drivers build config 00:03:40.991 net/octeon_ep: not in enabled drivers build config 00:03:40.991 net/pcap: not in enabled drivers build config 00:03:40.991 net/pfe: not in enabled drivers build config 00:03:40.991 net/qede: not in enabled drivers build config 00:03:40.991 net/ring: not in enabled drivers build config 00:03:40.991 net/sfc: not in enabled drivers build config 00:03:40.991 net/softnic: not in enabled drivers build config 00:03:40.991 net/tap: not in enabled drivers build config 00:03:40.991 net/thunderx: not in enabled drivers build config 00:03:40.991 net/txgbe: not in enabled drivers build config 00:03:40.991 net/vdev_netvsc: not in enabled drivers build config 00:03:40.991 net/vhost: not in enabled drivers build config 00:03:40.991 net/virtio: not in enabled drivers build config 00:03:40.991 net/vmxnet3: not in enabled drivers build config 00:03:40.991 raw/*: missing internal dependency, "rawdev" 00:03:40.991 crypto/armv8: not in enabled drivers build config 00:03:40.991 crypto/bcmfs: not in enabled drivers build config 00:03:40.991 crypto/caam_jr: not in enabled drivers build config 00:03:40.991 crypto/ccp: not in enabled drivers build config 00:03:40.991 crypto/cnxk: not in enabled drivers build config 00:03:40.991 crypto/dpaa_sec: not in enabled drivers build config 00:03:40.991 crypto/dpaa2_sec: not in enabled drivers build config 00:03:40.991 crypto/ipsec_mb: not in enabled drivers build config 00:03:40.991 crypto/mlx5: not in enabled drivers build config 00:03:40.991 crypto/mvsam: not in enabled drivers build config 
00:03:40.991 crypto/nitrox: not in enabled drivers build config 00:03:40.991 crypto/null: not in enabled drivers build config 00:03:40.991 crypto/octeontx: not in enabled drivers build config 00:03:40.991 crypto/openssl: not in enabled drivers build config 00:03:40.991 crypto/scheduler: not in enabled drivers build config 00:03:40.991 crypto/uadk: not in enabled drivers build config 00:03:40.991 crypto/virtio: not in enabled drivers build config 00:03:40.991 compress/isal: not in enabled drivers build config 00:03:40.991 compress/mlx5: not in enabled drivers build config 00:03:40.991 compress/nitrox: not in enabled drivers build config 00:03:40.991 compress/octeontx: not in enabled drivers build config 00:03:40.991 compress/zlib: not in enabled drivers build config 00:03:40.991 regex/*: missing internal dependency, "regexdev" 00:03:40.991 ml/*: missing internal dependency, "mldev" 00:03:40.991 vdpa/ifc: not in enabled drivers build config 00:03:40.991 vdpa/mlx5: not in enabled drivers build config 00:03:40.991 vdpa/nfp: not in enabled drivers build config 00:03:40.991 vdpa/sfc: not in enabled drivers build config 00:03:40.991 event/*: missing internal dependency, "eventdev" 00:03:40.991 baseband/*: missing internal dependency, "bbdev" 00:03:40.991 gpu/*: missing internal dependency, "gpudev" 00:03:40.991 00:03:40.991 00:03:40.991 Build targets in project: 85 00:03:40.991 00:03:40.991 DPDK 24.03.0 00:03:40.991 00:03:40.991 User defined options 00:03:40.991 buildtype : debug 00:03:40.991 default_library : shared 00:03:40.991 libdir : lib 00:03:40.991 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:40.991 b_sanitize : address 00:03:40.991 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:40.991 c_link_args : 00:03:40.991 cpu_instruction_set: native 00:03:40.991 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:40.991 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:40.991 enable_docs : false 00:03:40.991 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:40.991 enable_kmods : false 00:03:40.992 max_lcores : 128 00:03:40.992 tests : false 00:03:40.992 00:03:40.992 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:40.992 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:40.992 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:40.992 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:40.992 [3/268] Linking static target lib/librte_kvargs.a 00:03:40.992 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:40.992 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:40.992 [6/268] Linking static target lib/librte_log.a 00:03:41.250 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:41.250 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:41.250 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 
00:03:41.250 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:41.250 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:41.250 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:41.250 [13/268] Linking static target lib/librte_telemetry.a 00:03:41.250 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:41.250 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:41.250 [16/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.250 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:41.508 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:41.768 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:41.768 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.768 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:42.044 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:42.044 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:42.044 [24/268] Linking target lib/librte_log.so.24.1 00:03:42.044 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:42.044 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:42.044 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:42.044 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:42.044 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:42.044 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.044 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:42.302 [32/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:42.302 [33/268] Linking target lib/librte_kvargs.so.24.1 00:03:42.559 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:42.559 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:42.559 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:42.559 [37/268] Linking target lib/librte_telemetry.so.24.1 00:03:42.559 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:42.559 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:42.559 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:42.559 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:42.559 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:42.559 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:42.559 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:42.816 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:42.816 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:42.816 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:42.816 [48/268] Generating symbol file 
lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:43.074 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:43.074 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:43.074 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:43.332 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:43.332 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:43.332 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:43.332 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:43.332 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:43.332 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:43.332 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:43.332 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:43.590 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:43.590 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:43.590 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:43.590 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:43.848 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:43.848 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:43.848 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:43.848 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:44.107 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:44.107 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:44.107 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:44.107 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:44.364 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:44.364 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:44.364 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:44.364 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:44.364 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:44.364 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:44.364 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:44.364 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:44.622 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:44.622 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:44.622 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:44.622 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:44.622 [84/268] Linking static target lib/librte_ring.a 00:03:44.622 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:44.622 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:44.881 [87/268] Linking static target lib/librte_eal.a 00:03:44.881 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:44.881 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:45.140 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:45.140 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:45.140 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:45.140 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:45.140 [94/268] Linking static target lib/librte_mempool.a 00:03:45.140 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.399 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:45.399 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:45.399 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:45.399 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:45.659 [100/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:45.659 [101/268] Linking static target lib/librte_rcu.a 00:03:45.659 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:45.659 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:45.659 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:45.659 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:45.659 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:45.659 [107/268] Linking static target lib/librte_mbuf.a 00:03:45.659 [108/268] Linking static target lib/librte_meter.a 00:03:45.659 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:45.659 [110/268] Linking static target lib/librte_net.a 00:03:45.918 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:46.177 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.177 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:46.177 [114/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.177 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:46.177 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.436 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.436 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:46.436 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:46.695 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.695 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:46.695 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:46.954 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:46.954 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:47.213 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:47.213 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:47.213 [127/268] Linking static target lib/librte_pci.a 00:03:47.213 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:47.213 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:47.213 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:47.213 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:47.471 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:47.471 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:47.471 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:47.471 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:47.471 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:47.471 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.471 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:47.471 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:47.471 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:47.730 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:47.730 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:47.730 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:47.730 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:47.730 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:47.730 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:47.730 [147/268] Linking static target lib/librte_cmdline.a 00:03:47.989 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:47.989 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:47.989 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:48.247 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:48.247 [152/268] Linking static target lib/librte_timer.a 00:03:48.507 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:48.507 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:48.507 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:48.507 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:48.507 [157/268] Linking static target lib/librte_compressdev.a 00:03:48.507 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:48.765 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:48.765 [160/268] Linking static target lib/librte_hash.a 00:03:48.765 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:48.765 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:48.765 [163/268] Linking static target lib/librte_ethdev.a 00:03:49.024 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:49.024 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:49.024 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.024 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:49.024 [168/268] Linking static target lib/librte_dmadev.a 00:03:49.283 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:49.283 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:49.283 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.283 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:49.283 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:49.541 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.541 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:49.800 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:49.800 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:49.800 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:49.800 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:49.800 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:49.800 [181/268] Linking static target lib/librte_cryptodev.a 00:03:49.800 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.059 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.059 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:50.059 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:50.059 [186/268] Linking static target lib/librte_power.a 00:03:50.318 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:50.318 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:50.318 [189/268] Linking static target lib/librte_reorder.a 00:03:50.318 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:50.577 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:50.577 [192/268] Linking static target lib/librte_security.a 00:03:50.577 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:50.836 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.095 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:51.354 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.354 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.354 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:51.354 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:51.354 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:51.923 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:51.923 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:51.923 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:51.923 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:51.923 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:51.923 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:52.182 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:52.182 [208/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:52.182 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:52.182 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:52.182 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:52.442 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:52.442 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:52.442 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:52.442 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:52.442 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:52.442 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:52.442 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:52.699 [219/268] Linking static target drivers/librte_bus_vdev.a 00:03:52.699 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:52.699 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:52.700 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:52.700 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:52.700 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:52.700 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:52.957 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.215 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.783 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:57.082 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.082 [230/268] Linking target lib/librte_eal.so.24.1 00:03:57.082 [231/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:57.082 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:57.082 [233/268] Linking target lib/librte_timer.so.24.1 00:03:57.082 [234/268] Linking target lib/librte_ring.so.24.1 00:03:57.082 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:57.082 [236/268] Linking static target lib/librte_vhost.a 00:03:57.082 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:57.082 [238/268] Linking target lib/librte_pci.so.24.1 00:03:57.082 [239/268] Linking target lib/librte_meter.so.24.1 00:03:57.082 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:57.082 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:57.340 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:57.340 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:57.340 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:57.340 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:57.340 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:57.340 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:57.340 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:57.340 
[249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:57.340 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:57.340 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:57.599 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:57.599 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:57.599 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:57.599 [255/268] Linking target lib/librte_net.so.24.1 00:03:57.599 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:57.599 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:57.599 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.599 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:57.857 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:57.857 [261/268] Linking target lib/librte_security.so.24.1 00:03:57.857 [262/268] Linking target lib/librte_hash.so.24.1 00:03:57.857 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:57.857 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:57.857 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:57.857 [266/268] Linking target lib/librte_power.so.24.1 00:03:59.231 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.231 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:59.231 INFO: autodetecting backend as ninja 00:03:59.231 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:17.313 CC lib/ut_mock/mock.o 00:04:17.313 CC lib/log/log.o 00:04:17.313 CC lib/log/log_flags.o 00:04:17.313 CC lib/ut/ut.o 00:04:17.313 CC lib/log/log_deprecated.o 00:04:17.313 LIB libspdk_ut_mock.a 00:04:17.313 LIB libspdk_ut.a 00:04:17.313 LIB libspdk_log.a 00:04:17.313 SO libspdk_ut.so.2.0 00:04:17.313 SO libspdk_ut_mock.so.6.0 00:04:17.313 SO libspdk_log.so.7.1 00:04:17.313 SYMLINK libspdk_ut.so 00:04:17.313 SYMLINK libspdk_ut_mock.so 00:04:17.313 SYMLINK libspdk_log.so 00:04:17.313 CC lib/util/base64.o 00:04:17.313 CC lib/util/bit_array.o 00:04:17.313 CC lib/util/cpuset.o 00:04:17.313 CC lib/util/crc16.o 00:04:17.313 CC lib/util/crc32c.o 00:04:17.313 CC lib/util/crc32.o 00:04:17.313 CC lib/dma/dma.o 00:04:17.313 CC lib/ioat/ioat.o 00:04:17.313 CXX lib/trace_parser/trace.o 00:04:17.313 CC lib/vfio_user/host/vfio_user_pci.o 00:04:17.313 CC lib/vfio_user/host/vfio_user.o 00:04:17.313 CC lib/util/crc32_ieee.o 00:04:17.313 CC lib/util/crc64.o 00:04:17.313 CC lib/util/dif.o 00:04:17.313 CC lib/util/fd.o 00:04:17.313 CC lib/util/fd_group.o 00:04:17.313 LIB libspdk_dma.a 00:04:17.313 SO libspdk_dma.so.5.0 00:04:17.313 CC lib/util/file.o 00:04:17.313 CC lib/util/hexlify.o 00:04:17.313 LIB libspdk_ioat.a 00:04:17.313 SYMLINK libspdk_dma.so 00:04:17.313 CC lib/util/iov.o 00:04:17.313 SO libspdk_ioat.so.7.0 00:04:17.313 CC lib/util/math.o 00:04:17.313 CC lib/util/net.o 00:04:17.313 LIB libspdk_vfio_user.a 00:04:17.313 SYMLINK libspdk_ioat.so 00:04:17.313 SO libspdk_vfio_user.so.5.0 00:04:17.313 CC lib/util/pipe.o 00:04:17.313 CC lib/util/strerror_tls.o 00:04:17.313 CC lib/util/string.o 00:04:17.313 SYMLINK libspdk_vfio_user.so 00:04:17.313 CC lib/util/uuid.o 00:04:17.313 CC lib/util/xor.o 00:04:17.313 CC lib/util/zipf.o 00:04:17.313 CC 
lib/util/md5.o 00:04:17.313 LIB libspdk_util.a 00:04:17.313 SO libspdk_util.so.10.1 00:04:17.313 LIB libspdk_trace_parser.a 00:04:17.313 SO libspdk_trace_parser.so.6.0 00:04:17.313 SYMLINK libspdk_util.so 00:04:17.313 SYMLINK libspdk_trace_parser.so 00:04:17.313 CC lib/idxd/idxd.o 00:04:17.313 CC lib/vmd/led.o 00:04:17.313 CC lib/vmd/vmd.o 00:04:17.313 CC lib/idxd/idxd_kernel.o 00:04:17.313 CC lib/idxd/idxd_user.o 00:04:17.313 CC lib/rdma_utils/rdma_utils.o 00:04:17.313 CC lib/env_dpdk/env.o 00:04:17.313 CC lib/env_dpdk/memory.o 00:04:17.313 CC lib/conf/conf.o 00:04:17.313 CC lib/json/json_parse.o 00:04:17.313 CC lib/env_dpdk/pci.o 00:04:17.313 CC lib/env_dpdk/init.o 00:04:17.313 LIB libspdk_conf.a 00:04:17.313 CC lib/env_dpdk/threads.o 00:04:17.313 SO libspdk_conf.so.6.0 00:04:17.313 LIB libspdk_rdma_utils.a 00:04:17.313 CC lib/json/json_util.o 00:04:17.313 SO libspdk_rdma_utils.so.1.0 00:04:17.313 SYMLINK libspdk_conf.so 00:04:17.313 CC lib/json/json_write.o 00:04:17.313 SYMLINK libspdk_rdma_utils.so 00:04:17.313 CC lib/env_dpdk/pci_ioat.o 00:04:17.313 CC lib/env_dpdk/pci_virtio.o 00:04:17.313 CC lib/env_dpdk/pci_vmd.o 00:04:17.313 CC lib/env_dpdk/pci_idxd.o 00:04:17.313 CC lib/env_dpdk/pci_event.o 00:04:17.313 CC lib/env_dpdk/sigbus_handler.o 00:04:17.313 CC lib/env_dpdk/pci_dpdk.o 00:04:17.313 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:17.313 LIB libspdk_json.a 00:04:17.313 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:17.313 SO libspdk_json.so.6.0 00:04:17.313 LIB libspdk_idxd.a 00:04:17.313 LIB libspdk_vmd.a 00:04:17.313 SYMLINK libspdk_json.so 00:04:17.313 SO libspdk_idxd.so.12.1 00:04:17.313 SO libspdk_vmd.so.6.0 00:04:17.313 SYMLINK libspdk_idxd.so 00:04:17.313 SYMLINK libspdk_vmd.so 00:04:17.572 CC lib/rdma_provider/common.o 00:04:17.572 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:17.572 CC lib/jsonrpc/jsonrpc_server.o 00:04:17.572 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:17.572 CC lib/jsonrpc/jsonrpc_client.o 00:04:17.572 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:17.831 LIB libspdk_rdma_provider.a 00:04:17.831 SO libspdk_rdma_provider.so.7.0 00:04:17.831 SYMLINK libspdk_rdma_provider.so 00:04:17.831 LIB libspdk_jsonrpc.a 00:04:18.090 SO libspdk_jsonrpc.so.6.0 00:04:18.090 SYMLINK libspdk_jsonrpc.so 00:04:18.090 LIB libspdk_env_dpdk.a 00:04:18.348 SO libspdk_env_dpdk.so.15.1 00:04:18.348 SYMLINK libspdk_env_dpdk.so 00:04:18.348 CC lib/rpc/rpc.o 00:04:18.606 LIB libspdk_rpc.a 00:04:18.606 SO libspdk_rpc.so.6.0 00:04:18.865 SYMLINK libspdk_rpc.so 00:04:19.123 CC lib/notify/notify.o 00:04:19.123 CC lib/notify/notify_rpc.o 00:04:19.123 CC lib/keyring/keyring.o 00:04:19.123 CC lib/keyring/keyring_rpc.o 00:04:19.123 CC lib/trace/trace.o 00:04:19.123 CC lib/trace/trace_flags.o 00:04:19.123 CC lib/trace/trace_rpc.o 00:04:19.381 LIB libspdk_notify.a 00:04:19.381 SO libspdk_notify.so.6.0 00:04:19.381 LIB libspdk_keyring.a 00:04:19.381 SYMLINK libspdk_notify.so 00:04:19.381 LIB libspdk_trace.a 00:04:19.381 SO libspdk_keyring.so.2.0 00:04:19.381 SO libspdk_trace.so.11.0 00:04:19.639 SYMLINK libspdk_keyring.so 00:04:19.640 SYMLINK libspdk_trace.so 00:04:19.898 CC lib/sock/sock.o 00:04:19.898 CC lib/sock/sock_rpc.o 00:04:19.898 CC lib/thread/thread.o 00:04:19.898 CC lib/thread/iobuf.o 00:04:20.466 LIB libspdk_sock.a 00:04:20.466 SO libspdk_sock.so.10.0 00:04:20.466 SYMLINK libspdk_sock.so 00:04:21.058 CC lib/nvme/nvme_fabric.o 00:04:21.058 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:21.058 CC lib/nvme/nvme_ctrlr.o 00:04:21.058 CC lib/nvme/nvme_pcie.o 00:04:21.058 CC lib/nvme/nvme_ns_cmd.o 00:04:21.058 CC 
lib/nvme/nvme_pcie_common.o 00:04:21.058 CC lib/nvme/nvme_ns.o 00:04:21.058 CC lib/nvme/nvme_qpair.o 00:04:21.058 CC lib/nvme/nvme.o 00:04:21.627 CC lib/nvme/nvme_quirks.o 00:04:21.627 CC lib/nvme/nvme_transport.o 00:04:21.627 LIB libspdk_thread.a 00:04:21.627 SO libspdk_thread.so.11.0 00:04:21.627 CC lib/nvme/nvme_discovery.o 00:04:21.627 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:21.627 SYMLINK libspdk_thread.so 00:04:21.627 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:21.886 CC lib/nvme/nvme_tcp.o 00:04:21.886 CC lib/nvme/nvme_opal.o 00:04:21.886 CC lib/nvme/nvme_io_msg.o 00:04:21.886 CC lib/nvme/nvme_poll_group.o 00:04:22.145 CC lib/nvme/nvme_zns.o 00:04:22.145 CC lib/nvme/nvme_stubs.o 00:04:22.145 CC lib/nvme/nvme_auth.o 00:04:22.404 CC lib/nvme/nvme_cuse.o 00:04:22.404 CC lib/accel/accel.o 00:04:22.404 CC lib/accel/accel_rpc.o 00:04:22.404 CC lib/accel/accel_sw.o 00:04:22.662 CC lib/nvme/nvme_rdma.o 00:04:22.662 CC lib/blob/blobstore.o 00:04:22.662 CC lib/init/json_config.o 00:04:22.920 CC lib/virtio/virtio.o 00:04:22.920 CC lib/fsdev/fsdev.o 00:04:22.920 CC lib/init/subsystem.o 00:04:23.178 CC lib/virtio/virtio_vhost_user.o 00:04:23.178 CC lib/virtio/virtio_vfio_user.o 00:04:23.178 CC lib/virtio/virtio_pci.o 00:04:23.178 CC lib/init/subsystem_rpc.o 00:04:23.436 CC lib/init/rpc.o 00:04:23.436 CC lib/blob/request.o 00:04:23.436 CC lib/blob/zeroes.o 00:04:23.436 CC lib/blob/blob_bs_dev.o 00:04:23.436 CC lib/fsdev/fsdev_io.o 00:04:23.436 LIB libspdk_virtio.a 00:04:23.436 LIB libspdk_init.a 00:04:23.436 CC lib/fsdev/fsdev_rpc.o 00:04:23.436 LIB libspdk_accel.a 00:04:23.436 SO libspdk_virtio.so.7.0 00:04:23.436 SO libspdk_init.so.6.0 00:04:23.436 SO libspdk_accel.so.16.0 00:04:23.695 SYMLINK libspdk_virtio.so 00:04:23.695 SYMLINK libspdk_init.so 00:04:23.695 SYMLINK libspdk_accel.so 00:04:23.954 LIB libspdk_fsdev.a 00:04:23.954 SO libspdk_fsdev.so.2.0 00:04:23.954 CC lib/event/app.o 00:04:23.954 CC lib/event/reactor.o 00:04:23.954 CC lib/event/log_rpc.o 00:04:23.954 CC lib/event/app_rpc.o 00:04:23.954 CC lib/event/scheduler_static.o 00:04:23.954 LIB libspdk_nvme.a 00:04:23.954 SYMLINK libspdk_fsdev.so 00:04:23.954 CC lib/bdev/bdev.o 00:04:23.954 CC lib/bdev/bdev_rpc.o 00:04:23.954 CC lib/bdev/bdev_zone.o 00:04:23.954 CC lib/bdev/part.o 00:04:24.212 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:24.212 SO libspdk_nvme.so.15.0 00:04:24.212 CC lib/bdev/scsi_nvme.o 00:04:24.471 LIB libspdk_event.a 00:04:24.471 SO libspdk_event.so.14.0 00:04:24.471 SYMLINK libspdk_nvme.so 00:04:24.471 SYMLINK libspdk_event.so 00:04:25.041 LIB libspdk_fuse_dispatcher.a 00:04:25.041 SO libspdk_fuse_dispatcher.so.1.0 00:04:25.041 SYMLINK libspdk_fuse_dispatcher.so 00:04:26.420 LIB libspdk_blob.a 00:04:26.679 SO libspdk_blob.so.12.0 00:04:26.679 SYMLINK libspdk_blob.so 00:04:26.938 LIB libspdk_bdev.a 00:04:26.939 SO libspdk_bdev.so.17.0 00:04:26.939 CC lib/lvol/lvol.o 00:04:27.197 CC lib/blobfs/blobfs.o 00:04:27.197 CC lib/blobfs/tree.o 00:04:27.197 SYMLINK libspdk_bdev.so 00:04:27.457 CC lib/ublk/ublk_rpc.o 00:04:27.457 CC lib/ublk/ublk.o 00:04:27.457 CC lib/ftl/ftl_init.o 00:04:27.457 CC lib/ftl/ftl_core.o 00:04:27.457 CC lib/ftl/ftl_layout.o 00:04:27.457 CC lib/nbd/nbd.o 00:04:27.457 CC lib/scsi/dev.o 00:04:27.457 CC lib/nvmf/ctrlr.o 00:04:27.457 CC lib/scsi/lun.o 00:04:27.716 CC lib/scsi/port.o 00:04:27.716 CC lib/scsi/scsi.o 00:04:27.716 CC lib/scsi/scsi_bdev.o 00:04:27.716 CC lib/nbd/nbd_rpc.o 00:04:27.716 CC lib/scsi/scsi_pr.o 00:04:27.716 CC lib/ftl/ftl_debug.o 00:04:27.716 CC lib/scsi/scsi_rpc.o 00:04:27.716 CC 
lib/scsi/task.o 00:04:27.976 LIB libspdk_nbd.a 00:04:27.976 LIB libspdk_blobfs.a 00:04:27.976 SO libspdk_nbd.so.7.0 00:04:27.976 SO libspdk_blobfs.so.11.0 00:04:27.976 CC lib/ftl/ftl_io.o 00:04:27.976 SYMLINK libspdk_nbd.so 00:04:27.976 CC lib/nvmf/ctrlr_discovery.o 00:04:27.976 CC lib/ftl/ftl_sb.o 00:04:27.976 LIB libspdk_ublk.a 00:04:27.976 CC lib/ftl/ftl_l2p.o 00:04:27.976 SYMLINK libspdk_blobfs.so 00:04:27.976 CC lib/nvmf/ctrlr_bdev.o 00:04:27.976 SO libspdk_ublk.so.3.0 00:04:27.976 LIB libspdk_lvol.a 00:04:28.235 SO libspdk_lvol.so.11.0 00:04:28.235 SYMLINK libspdk_ublk.so 00:04:28.235 CC lib/ftl/ftl_l2p_flat.o 00:04:28.235 CC lib/nvmf/subsystem.o 00:04:28.235 SYMLINK libspdk_lvol.so 00:04:28.235 CC lib/ftl/ftl_nv_cache.o 00:04:28.235 CC lib/ftl/ftl_band.o 00:04:28.235 LIB libspdk_scsi.a 00:04:28.235 CC lib/ftl/ftl_band_ops.o 00:04:28.235 CC lib/ftl/ftl_writer.o 00:04:28.235 SO libspdk_scsi.so.9.0 00:04:28.235 CC lib/ftl/ftl_rq.o 00:04:28.494 SYMLINK libspdk_scsi.so 00:04:28.494 CC lib/ftl/ftl_reloc.o 00:04:28.494 CC lib/nvmf/nvmf.o 00:04:28.494 CC lib/ftl/ftl_l2p_cache.o 00:04:28.494 CC lib/ftl/ftl_p2l.o 00:04:28.753 CC lib/iscsi/conn.o 00:04:28.753 CC lib/vhost/vhost.o 00:04:28.753 CC lib/vhost/vhost_rpc.o 00:04:28.753 CC lib/vhost/vhost_scsi.o 00:04:29.013 CC lib/vhost/vhost_blk.o 00:04:29.013 CC lib/vhost/rte_vhost_user.o 00:04:29.273 CC lib/iscsi/init_grp.o 00:04:29.273 CC lib/iscsi/iscsi.o 00:04:29.533 CC lib/ftl/ftl_p2l_log.o 00:04:29.533 CC lib/ftl/mngt/ftl_mngt.o 00:04:29.533 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:29.533 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:29.533 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:29.533 CC lib/nvmf/nvmf_rpc.o 00:04:29.533 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:29.792 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:29.792 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:29.792 CC lib/iscsi/param.o 00:04:29.792 CC lib/iscsi/portal_grp.o 00:04:29.792 CC lib/iscsi/tgt_node.o 00:04:29.792 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:29.792 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:30.051 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:30.051 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:30.051 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:30.051 CC lib/iscsi/iscsi_subsystem.o 00:04:30.051 CC lib/iscsi/iscsi_rpc.o 00:04:30.051 LIB libspdk_vhost.a 00:04:30.310 CC lib/iscsi/task.o 00:04:30.310 CC lib/nvmf/transport.o 00:04:30.310 SO libspdk_vhost.so.8.0 00:04:30.310 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:30.310 CC lib/ftl/utils/ftl_conf.o 00:04:30.310 SYMLINK libspdk_vhost.so 00:04:30.310 CC lib/ftl/utils/ftl_md.o 00:04:30.310 CC lib/ftl/utils/ftl_mempool.o 00:04:30.310 CC lib/ftl/utils/ftl_bitmap.o 00:04:30.568 CC lib/ftl/utils/ftl_property.o 00:04:30.568 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:30.568 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:30.568 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:30.568 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:30.568 CC lib/nvmf/tcp.o 00:04:30.568 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:30.827 CC lib/nvmf/stubs.o 00:04:30.827 CC lib/nvmf/mdns_server.o 00:04:30.827 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:30.827 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:30.827 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:30.827 CC lib/nvmf/rdma.o 00:04:30.827 LIB libspdk_iscsi.a 00:04:30.827 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:30.827 SO libspdk_iscsi.so.8.0 00:04:31.086 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:31.086 CC lib/nvmf/auth.o 00:04:31.086 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:31.086 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:31.086 SYMLINK libspdk_iscsi.so 00:04:31.086 
CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:31.086 CC lib/ftl/base/ftl_base_dev.o 00:04:31.086 CC lib/ftl/base/ftl_base_bdev.o 00:04:31.086 CC lib/ftl/ftl_trace.o 00:04:31.345 LIB libspdk_ftl.a 00:04:31.605 SO libspdk_ftl.so.9.0 00:04:32.173 SYMLINK libspdk_ftl.so 00:04:33.111 LIB libspdk_nvmf.a 00:04:33.371 SO libspdk_nvmf.so.20.0 00:04:33.629 SYMLINK libspdk_nvmf.so 00:04:33.888 CC module/env_dpdk/env_dpdk_rpc.o 00:04:34.146 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:34.146 CC module/scheduler/gscheduler/gscheduler.o 00:04:34.146 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:34.146 CC module/sock/posix/posix.o 00:04:34.146 CC module/fsdev/aio/fsdev_aio.o 00:04:34.146 CC module/keyring/file/keyring.o 00:04:34.146 CC module/blob/bdev/blob_bdev.o 00:04:34.146 CC module/keyring/linux/keyring.o 00:04:34.146 CC module/accel/error/accel_error.o 00:04:34.146 LIB libspdk_env_dpdk_rpc.a 00:04:34.146 SO libspdk_env_dpdk_rpc.so.6.0 00:04:34.146 SYMLINK libspdk_env_dpdk_rpc.so 00:04:34.146 CC module/accel/error/accel_error_rpc.o 00:04:34.146 LIB libspdk_scheduler_gscheduler.a 00:04:34.146 CC module/keyring/file/keyring_rpc.o 00:04:34.146 CC module/keyring/linux/keyring_rpc.o 00:04:34.146 LIB libspdk_scheduler_dpdk_governor.a 00:04:34.146 SO libspdk_scheduler_gscheduler.so.4.0 00:04:34.405 LIB libspdk_scheduler_dynamic.a 00:04:34.405 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:34.405 SO libspdk_scheduler_dynamic.so.4.0 00:04:34.405 SYMLINK libspdk_scheduler_gscheduler.so 00:04:34.405 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:34.405 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:34.405 CC module/fsdev/aio/linux_aio_mgr.o 00:04:34.405 LIB libspdk_accel_error.a 00:04:34.405 SYMLINK libspdk_scheduler_dynamic.so 00:04:34.405 LIB libspdk_keyring_linux.a 00:04:34.405 LIB libspdk_blob_bdev.a 00:04:34.405 LIB libspdk_keyring_file.a 00:04:34.405 SO libspdk_accel_error.so.2.0 00:04:34.405 SO libspdk_keyring_linux.so.1.0 00:04:34.405 SO libspdk_blob_bdev.so.12.0 00:04:34.405 SO libspdk_keyring_file.so.2.0 00:04:34.405 SYMLINK libspdk_accel_error.so 00:04:34.405 SYMLINK libspdk_blob_bdev.so 00:04:34.405 SYMLINK libspdk_keyring_linux.so 00:04:34.405 SYMLINK libspdk_keyring_file.so 00:04:34.405 CC module/accel/ioat/accel_ioat_rpc.o 00:04:34.405 CC module/accel/ioat/accel_ioat.o 00:04:34.664 CC module/accel/dsa/accel_dsa.o 00:04:34.664 CC module/accel/dsa/accel_dsa_rpc.o 00:04:34.664 CC module/accel/iaa/accel_iaa.o 00:04:34.664 LIB libspdk_accel_ioat.a 00:04:34.664 CC module/accel/iaa/accel_iaa_rpc.o 00:04:34.664 SO libspdk_accel_ioat.so.6.0 00:04:34.664 CC module/blobfs/bdev/blobfs_bdev.o 00:04:34.664 CC module/bdev/delay/vbdev_delay.o 00:04:34.664 CC module/bdev/error/vbdev_error.o 00:04:34.922 SYMLINK libspdk_accel_ioat.so 00:04:34.922 CC module/bdev/gpt/gpt.o 00:04:34.922 LIB libspdk_fsdev_aio.a 00:04:34.922 LIB libspdk_accel_dsa.a 00:04:34.922 LIB libspdk_accel_iaa.a 00:04:34.922 SO libspdk_accel_dsa.so.5.0 00:04:34.922 SO libspdk_fsdev_aio.so.1.0 00:04:34.922 LIB libspdk_sock_posix.a 00:04:34.922 SO libspdk_accel_iaa.so.3.0 00:04:34.922 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:34.922 SO libspdk_sock_posix.so.6.0 00:04:34.922 CC module/bdev/lvol/vbdev_lvol.o 00:04:34.922 SYMLINK libspdk_accel_dsa.so 00:04:34.922 SYMLINK libspdk_fsdev_aio.so 00:04:34.922 SYMLINK libspdk_accel_iaa.so 00:04:34.922 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:34.922 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:34.922 CC module/bdev/malloc/bdev_malloc.o 00:04:34.922 CC module/bdev/gpt/vbdev_gpt.o 
00:04:34.922 SYMLINK libspdk_sock_posix.so 00:04:34.922 CC module/bdev/error/vbdev_error_rpc.o 00:04:35.180 LIB libspdk_blobfs_bdev.a 00:04:35.180 SO libspdk_blobfs_bdev.so.6.0 00:04:35.180 LIB libspdk_bdev_delay.a 00:04:35.180 SO libspdk_bdev_delay.so.6.0 00:04:35.180 CC module/bdev/null/bdev_null.o 00:04:35.180 LIB libspdk_bdev_error.a 00:04:35.180 SYMLINK libspdk_blobfs_bdev.so 00:04:35.180 CC module/bdev/null/bdev_null_rpc.o 00:04:35.180 CC module/bdev/nvme/bdev_nvme.o 00:04:35.180 SO libspdk_bdev_error.so.6.0 00:04:35.180 SYMLINK libspdk_bdev_delay.so 00:04:35.180 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:35.438 LIB libspdk_bdev_gpt.a 00:04:35.438 SYMLINK libspdk_bdev_error.so 00:04:35.438 CC module/bdev/passthru/vbdev_passthru.o 00:04:35.438 SO libspdk_bdev_gpt.so.6.0 00:04:35.438 SYMLINK libspdk_bdev_gpt.so 00:04:35.438 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:35.438 LIB libspdk_bdev_null.a 00:04:35.438 CC module/bdev/raid/bdev_raid.o 00:04:35.438 LIB libspdk_bdev_lvol.a 00:04:35.438 SO libspdk_bdev_null.so.6.0 00:04:35.438 CC module/bdev/split/vbdev_split.o 00:04:35.696 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:35.696 SO libspdk_bdev_lvol.so.6.0 00:04:35.696 SYMLINK libspdk_bdev_null.so 00:04:35.696 CC module/bdev/nvme/nvme_rpc.o 00:04:35.696 CC module/bdev/xnvme/bdev_xnvme.o 00:04:35.696 LIB libspdk_bdev_malloc.a 00:04:35.696 SYMLINK libspdk_bdev_lvol.so 00:04:35.696 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:35.696 CC module/bdev/nvme/bdev_mdns_client.o 00:04:35.696 SO libspdk_bdev_malloc.so.6.0 00:04:35.696 SYMLINK libspdk_bdev_malloc.so 00:04:35.696 CC module/bdev/nvme/vbdev_opal.o 00:04:35.696 CC module/bdev/split/vbdev_split_rpc.o 00:04:35.696 LIB libspdk_bdev_passthru.a 00:04:35.954 SO libspdk_bdev_passthru.so.6.0 00:04:35.954 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:35.954 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:35.954 SYMLINK libspdk_bdev_passthru.so 00:04:35.954 CC module/bdev/aio/bdev_aio.o 00:04:35.954 LIB libspdk_bdev_split.a 00:04:35.954 CC module/bdev/ftl/bdev_ftl.o 00:04:35.954 SO libspdk_bdev_split.so.6.0 00:04:35.954 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:36.212 LIB libspdk_bdev_xnvme.a 00:04:36.212 CC module/bdev/iscsi/bdev_iscsi.o 00:04:36.212 SYMLINK libspdk_bdev_split.so 00:04:36.212 CC module/bdev/raid/bdev_raid_rpc.o 00:04:36.212 LIB libspdk_bdev_zone_block.a 00:04:36.212 SO libspdk_bdev_zone_block.so.6.0 00:04:36.212 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:36.212 SO libspdk_bdev_xnvme.so.3.0 00:04:36.212 SYMLINK libspdk_bdev_zone_block.so 00:04:36.212 SYMLINK libspdk_bdev_xnvme.so 00:04:36.212 CC module/bdev/raid/bdev_raid_sb.o 00:04:36.212 CC module/bdev/raid/raid0.o 00:04:36.212 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:36.212 CC module/bdev/raid/raid1.o 00:04:36.471 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:36.471 CC module/bdev/aio/bdev_aio_rpc.o 00:04:36.471 CC module/bdev/raid/concat.o 00:04:36.471 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:36.471 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:36.471 LIB libspdk_bdev_iscsi.a 00:04:36.471 SO libspdk_bdev_iscsi.so.6.0 00:04:36.471 LIB libspdk_bdev_aio.a 00:04:36.471 LIB libspdk_bdev_ftl.a 00:04:36.471 SO libspdk_bdev_aio.so.6.0 00:04:36.471 SO libspdk_bdev_ftl.so.6.0 00:04:36.729 SYMLINK libspdk_bdev_iscsi.so 00:04:36.729 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:36.729 SYMLINK libspdk_bdev_aio.so 00:04:36.729 SYMLINK libspdk_bdev_ftl.so 00:04:36.729 LIB libspdk_bdev_raid.a 00:04:36.729 LIB libspdk_bdev_virtio.a 00:04:36.729 SO 
libspdk_bdev_raid.so.6.0 00:04:36.729 SO libspdk_bdev_virtio.so.6.0 00:04:36.987 SYMLINK libspdk_bdev_raid.so 00:04:36.987 SYMLINK libspdk_bdev_virtio.so 00:04:38.364 LIB libspdk_bdev_nvme.a 00:04:38.364 SO libspdk_bdev_nvme.so.7.1 00:04:38.364 SYMLINK libspdk_bdev_nvme.so 00:04:38.938 CC module/event/subsystems/iobuf/iobuf.o 00:04:38.938 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:38.938 CC module/event/subsystems/scheduler/scheduler.o 00:04:38.938 CC module/event/subsystems/fsdev/fsdev.o 00:04:38.938 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:38.938 CC module/event/subsystems/sock/sock.o 00:04:38.938 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:38.938 CC module/event/subsystems/vmd/vmd.o 00:04:38.938 CC module/event/subsystems/keyring/keyring.o 00:04:38.938 LIB libspdk_event_fsdev.a 00:04:38.938 LIB libspdk_event_keyring.a 00:04:38.938 LIB libspdk_event_sock.a 00:04:38.938 LIB libspdk_event_iobuf.a 00:04:38.938 LIB libspdk_event_vhost_blk.a 00:04:38.938 LIB libspdk_event_scheduler.a 00:04:38.938 LIB libspdk_event_vmd.a 00:04:38.938 SO libspdk_event_fsdev.so.1.0 00:04:38.938 SO libspdk_event_keyring.so.1.0 00:04:38.938 SO libspdk_event_sock.so.5.0 00:04:38.938 SO libspdk_event_vhost_blk.so.3.0 00:04:38.938 SO libspdk_event_iobuf.so.3.0 00:04:38.938 SO libspdk_event_scheduler.so.4.0 00:04:39.198 SO libspdk_event_vmd.so.6.0 00:04:39.198 SYMLINK libspdk_event_fsdev.so 00:04:39.198 SYMLINK libspdk_event_sock.so 00:04:39.198 SYMLINK libspdk_event_keyring.so 00:04:39.198 SYMLINK libspdk_event_vhost_blk.so 00:04:39.198 SYMLINK libspdk_event_iobuf.so 00:04:39.198 SYMLINK libspdk_event_scheduler.so 00:04:39.198 SYMLINK libspdk_event_vmd.so 00:04:39.457 CC module/event/subsystems/accel/accel.o 00:04:39.717 LIB libspdk_event_accel.a 00:04:39.717 SO libspdk_event_accel.so.6.0 00:04:39.717 SYMLINK libspdk_event_accel.so 00:04:40.284 CC module/event/subsystems/bdev/bdev.o 00:04:40.284 LIB libspdk_event_bdev.a 00:04:40.284 SO libspdk_event_bdev.so.6.0 00:04:40.544 SYMLINK libspdk_event_bdev.so 00:04:40.803 CC module/event/subsystems/scsi/scsi.o 00:04:40.803 CC module/event/subsystems/ublk/ublk.o 00:04:40.803 CC module/event/subsystems/nbd/nbd.o 00:04:40.803 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:40.803 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:41.062 LIB libspdk_event_ublk.a 00:04:41.062 LIB libspdk_event_nbd.a 00:04:41.062 LIB libspdk_event_scsi.a 00:04:41.062 SO libspdk_event_nbd.so.6.0 00:04:41.062 SO libspdk_event_ublk.so.3.0 00:04:41.062 SO libspdk_event_scsi.so.6.0 00:04:41.062 LIB libspdk_event_nvmf.a 00:04:41.062 SYMLINK libspdk_event_scsi.so 00:04:41.062 SYMLINK libspdk_event_nbd.so 00:04:41.062 SYMLINK libspdk_event_ublk.so 00:04:41.062 SO libspdk_event_nvmf.so.6.0 00:04:41.062 SYMLINK libspdk_event_nvmf.so 00:04:41.320 CC module/event/subsystems/iscsi/iscsi.o 00:04:41.320 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:41.578 LIB libspdk_event_vhost_scsi.a 00:04:41.578 LIB libspdk_event_iscsi.a 00:04:41.578 SO libspdk_event_vhost_scsi.so.3.0 00:04:41.578 SO libspdk_event_iscsi.so.6.0 00:04:41.578 SYMLINK libspdk_event_vhost_scsi.so 00:04:41.837 SYMLINK libspdk_event_iscsi.so 00:04:41.837 SO libspdk.so.6.0 00:04:41.837 SYMLINK libspdk.so 00:04:42.404 CC app/trace_record/trace_record.o 00:04:42.404 CXX app/trace/trace.o 00:04:42.404 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:42.404 CC app/iscsi_tgt/iscsi_tgt.o 00:04:42.404 CC app/nvmf_tgt/nvmf_main.o 00:04:42.404 CC examples/util/zipf/zipf.o 00:04:42.404 CC app/spdk_tgt/spdk_tgt.o 00:04:42.404 
CC test/thread/poller_perf/poller_perf.o 00:04:42.405 CC examples/ioat/perf/perf.o 00:04:42.405 CC test/dma/test_dma/test_dma.o 00:04:42.405 LINK nvmf_tgt 00:04:42.405 LINK interrupt_tgt 00:04:42.405 LINK iscsi_tgt 00:04:42.405 LINK zipf 00:04:42.405 LINK poller_perf 00:04:42.405 LINK spdk_trace_record 00:04:42.405 LINK spdk_tgt 00:04:42.664 LINK ioat_perf 00:04:42.664 LINK spdk_trace 00:04:42.664 CC examples/ioat/verify/verify.o 00:04:42.664 TEST_HEADER include/spdk/accel.h 00:04:42.664 TEST_HEADER include/spdk/accel_module.h 00:04:42.664 TEST_HEADER include/spdk/assert.h 00:04:42.664 TEST_HEADER include/spdk/barrier.h 00:04:42.664 TEST_HEADER include/spdk/base64.h 00:04:42.664 TEST_HEADER include/spdk/bdev.h 00:04:42.664 TEST_HEADER include/spdk/bdev_module.h 00:04:42.664 TEST_HEADER include/spdk/bdev_zone.h 00:04:42.664 TEST_HEADER include/spdk/bit_array.h 00:04:42.664 TEST_HEADER include/spdk/bit_pool.h 00:04:42.664 TEST_HEADER include/spdk/blob_bdev.h 00:04:42.664 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:42.664 TEST_HEADER include/spdk/blobfs.h 00:04:42.664 TEST_HEADER include/spdk/blob.h 00:04:42.664 TEST_HEADER include/spdk/conf.h 00:04:42.664 TEST_HEADER include/spdk/config.h 00:04:42.664 CC app/spdk_lspci/spdk_lspci.o 00:04:42.664 TEST_HEADER include/spdk/cpuset.h 00:04:42.664 TEST_HEADER include/spdk/crc16.h 00:04:42.664 CC app/spdk_nvme_perf/perf.o 00:04:42.664 TEST_HEADER include/spdk/crc32.h 00:04:42.664 TEST_HEADER include/spdk/crc64.h 00:04:42.664 TEST_HEADER include/spdk/dif.h 00:04:42.664 TEST_HEADER include/spdk/dma.h 00:04:42.664 TEST_HEADER include/spdk/endian.h 00:04:42.664 TEST_HEADER include/spdk/env_dpdk.h 00:04:42.923 TEST_HEADER include/spdk/env.h 00:04:42.923 TEST_HEADER include/spdk/event.h 00:04:42.923 TEST_HEADER include/spdk/fd_group.h 00:04:42.923 TEST_HEADER include/spdk/fd.h 00:04:42.923 TEST_HEADER include/spdk/file.h 00:04:42.923 TEST_HEADER include/spdk/fsdev.h 00:04:42.923 CC app/spdk_nvme_identify/identify.o 00:04:42.923 TEST_HEADER include/spdk/fsdev_module.h 00:04:42.923 TEST_HEADER include/spdk/ftl.h 00:04:42.923 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:42.923 TEST_HEADER include/spdk/gpt_spec.h 00:04:42.923 TEST_HEADER include/spdk/hexlify.h 00:04:42.923 TEST_HEADER include/spdk/histogram_data.h 00:04:42.923 TEST_HEADER include/spdk/idxd.h 00:04:42.923 TEST_HEADER include/spdk/idxd_spec.h 00:04:42.923 CC app/spdk_nvme_discover/discovery_aer.o 00:04:42.923 TEST_HEADER include/spdk/init.h 00:04:42.923 TEST_HEADER include/spdk/ioat.h 00:04:42.923 TEST_HEADER include/spdk/ioat_spec.h 00:04:42.923 TEST_HEADER include/spdk/iscsi_spec.h 00:04:42.923 TEST_HEADER include/spdk/json.h 00:04:42.923 TEST_HEADER include/spdk/jsonrpc.h 00:04:42.923 CC test/app/bdev_svc/bdev_svc.o 00:04:42.923 TEST_HEADER include/spdk/keyring.h 00:04:42.923 TEST_HEADER include/spdk/keyring_module.h 00:04:42.923 TEST_HEADER include/spdk/likely.h 00:04:42.923 TEST_HEADER include/spdk/log.h 00:04:42.923 TEST_HEADER include/spdk/lvol.h 00:04:42.923 TEST_HEADER include/spdk/md5.h 00:04:42.923 TEST_HEADER include/spdk/memory.h 00:04:42.923 TEST_HEADER include/spdk/mmio.h 00:04:42.923 LINK test_dma 00:04:42.923 TEST_HEADER include/spdk/nbd.h 00:04:42.923 TEST_HEADER include/spdk/net.h 00:04:42.923 TEST_HEADER include/spdk/notify.h 00:04:42.923 TEST_HEADER include/spdk/nvme.h 00:04:42.923 TEST_HEADER include/spdk/nvme_intel.h 00:04:42.923 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:42.923 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:42.923 TEST_HEADER 
include/spdk/nvme_spec.h 00:04:42.923 TEST_HEADER include/spdk/nvme_zns.h 00:04:42.923 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:42.923 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:42.923 TEST_HEADER include/spdk/nvmf.h 00:04:42.923 TEST_HEADER include/spdk/nvmf_spec.h 00:04:42.923 TEST_HEADER include/spdk/nvmf_transport.h 00:04:42.923 TEST_HEADER include/spdk/opal.h 00:04:42.923 TEST_HEADER include/spdk/opal_spec.h 00:04:42.923 TEST_HEADER include/spdk/pci_ids.h 00:04:42.923 TEST_HEADER include/spdk/pipe.h 00:04:42.923 TEST_HEADER include/spdk/queue.h 00:04:42.923 TEST_HEADER include/spdk/reduce.h 00:04:42.923 TEST_HEADER include/spdk/rpc.h 00:04:42.923 TEST_HEADER include/spdk/scheduler.h 00:04:42.923 TEST_HEADER include/spdk/scsi.h 00:04:42.923 TEST_HEADER include/spdk/scsi_spec.h 00:04:42.923 TEST_HEADER include/spdk/sock.h 00:04:42.923 LINK spdk_lspci 00:04:42.923 TEST_HEADER include/spdk/stdinc.h 00:04:42.923 TEST_HEADER include/spdk/string.h 00:04:42.923 TEST_HEADER include/spdk/thread.h 00:04:42.923 TEST_HEADER include/spdk/trace.h 00:04:42.923 TEST_HEADER include/spdk/trace_parser.h 00:04:42.923 TEST_HEADER include/spdk/tree.h 00:04:42.923 TEST_HEADER include/spdk/ublk.h 00:04:42.923 CC test/env/mem_callbacks/mem_callbacks.o 00:04:42.923 TEST_HEADER include/spdk/util.h 00:04:42.923 TEST_HEADER include/spdk/uuid.h 00:04:42.923 TEST_HEADER include/spdk/version.h 00:04:42.923 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:42.923 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:42.923 TEST_HEADER include/spdk/vhost.h 00:04:42.923 TEST_HEADER include/spdk/vmd.h 00:04:42.923 TEST_HEADER include/spdk/xor.h 00:04:42.923 TEST_HEADER include/spdk/zipf.h 00:04:42.923 CXX test/cpp_headers/accel.o 00:04:42.923 LINK verify 00:04:42.923 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:42.923 LINK bdev_svc 00:04:42.923 LINK spdk_nvme_discover 00:04:43.182 CXX test/cpp_headers/accel_module.o 00:04:43.182 CXX test/cpp_headers/assert.o 00:04:43.182 CXX test/cpp_headers/barrier.o 00:04:43.182 CC test/env/vtophys/vtophys.o 00:04:43.182 CC examples/sock/hello_world/hello_sock.o 00:04:43.182 CC examples/thread/thread/thread_ex.o 00:04:43.182 CC test/app/histogram_perf/histogram_perf.o 00:04:43.441 CC test/app/jsoncat/jsoncat.o 00:04:43.441 CXX test/cpp_headers/base64.o 00:04:43.441 LINK nvme_fuzz 00:04:43.441 LINK mem_callbacks 00:04:43.441 LINK vtophys 00:04:43.441 LINK histogram_perf 00:04:43.441 LINK jsoncat 00:04:43.441 CXX test/cpp_headers/bdev.o 00:04:43.441 LINK hello_sock 00:04:43.441 CXX test/cpp_headers/bdev_module.o 00:04:43.441 LINK thread 00:04:43.700 CXX test/cpp_headers/bdev_zone.o 00:04:43.700 CXX test/cpp_headers/bit_array.o 00:04:43.700 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:43.700 LINK spdk_nvme_perf 00:04:43.700 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:43.700 CXX test/cpp_headers/bit_pool.o 00:04:43.700 LINK spdk_nvme_identify 00:04:43.700 CC test/env/memory/memory_ut.o 00:04:43.700 CC test/env/pci/pci_ut.o 00:04:43.700 LINK env_dpdk_post_init 00:04:43.700 CXX test/cpp_headers/blob_bdev.o 00:04:43.960 CC test/app/stub/stub.o 00:04:43.960 CC examples/vmd/lsvmd/lsvmd.o 00:04:43.960 CC app/spdk_top/spdk_top.o 00:04:43.960 CC examples/idxd/perf/perf.o 00:04:43.960 CXX test/cpp_headers/blobfs_bdev.o 00:04:43.960 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:43.960 LINK stub 00:04:43.960 LINK lsvmd 00:04:43.960 CC examples/vmd/led/led.o 00:04:44.219 CXX test/cpp_headers/blobfs.o 00:04:44.219 CXX test/cpp_headers/blob.o 00:04:44.219 CXX test/cpp_headers/conf.o 
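The TEST_HEADER list and the CXX test/cpp_headers/*.o compiles interleaved with it appear to be a self-containedness check: one tiny translation unit per public header. A rough shell equivalent, assuming the usual include/spdk layout (compiler flags are illustrative):

  shopt -s nullglob
  for h in include/spdk/*.h; do
      printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/hdr_check.cpp
      g++ -Iinclude -c /tmp/hdr_check.cpp -o /dev/null || echo "not self-contained: $h"
  done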
00:04:44.219 LINK led 00:04:44.219 LINK pci_ut 00:04:44.219 LINK hello_fsdev 00:04:44.219 LINK idxd_perf 00:04:44.479 CXX test/cpp_headers/config.o 00:04:44.479 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:44.479 CXX test/cpp_headers/cpuset.o 00:04:44.479 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:44.479 CC test/event/event_perf/event_perf.o 00:04:44.479 CC test/nvme/aer/aer.o 00:04:44.479 CXX test/cpp_headers/crc16.o 00:04:44.738 CC test/event/reactor/reactor.o 00:04:44.738 CC examples/accel/perf/accel_perf.o 00:04:44.738 LINK event_perf 00:04:44.738 CC test/event/reactor_perf/reactor_perf.o 00:04:44.738 CXX test/cpp_headers/crc32.o 00:04:44.738 LINK reactor 00:04:44.996 LINK reactor_perf 00:04:44.996 LINK aer 00:04:44.996 LINK spdk_top 00:04:44.996 CC test/event/app_repeat/app_repeat.o 00:04:44.996 CXX test/cpp_headers/crc64.o 00:04:44.996 LINK vhost_fuzz 00:04:44.996 LINK memory_ut 00:04:44.996 CC test/event/scheduler/scheduler.o 00:04:44.996 LINK app_repeat 00:04:44.996 CXX test/cpp_headers/dif.o 00:04:45.254 CC test/nvme/reset/reset.o 00:04:45.254 CC app/vhost/vhost.o 00:04:45.254 CC app/spdk_dd/spdk_dd.o 00:04:45.254 LINK accel_perf 00:04:45.255 CC test/rpc_client/rpc_client_test.o 00:04:45.255 CXX test/cpp_headers/dma.o 00:04:45.255 CC examples/blob/hello_world/hello_blob.o 00:04:45.255 LINK scheduler 00:04:45.255 LINK vhost 00:04:45.512 CC examples/blob/cli/blobcli.o 00:04:45.512 CXX test/cpp_headers/endian.o 00:04:45.512 LINK rpc_client_test 00:04:45.512 LINK reset 00:04:45.512 CC test/nvme/sgl/sgl.o 00:04:45.512 LINK iscsi_fuzz 00:04:45.512 LINK hello_blob 00:04:45.512 LINK spdk_dd 00:04:45.512 CXX test/cpp_headers/env_dpdk.o 00:04:45.512 CXX test/cpp_headers/env.o 00:04:45.770 CC test/nvme/overhead/overhead.o 00:04:45.770 CC test/nvme/e2edp/nvme_dp.o 00:04:45.770 CC examples/nvme/hello_world/hello_world.o 00:04:45.770 CXX test/cpp_headers/event.o 00:04:45.770 LINK sgl 00:04:45.770 CC examples/nvme/reconnect/reconnect.o 00:04:45.770 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:45.770 CC test/nvme/err_injection/err_injection.o 00:04:45.770 LINK blobcli 00:04:46.028 LINK nvme_dp 00:04:46.028 LINK hello_world 00:04:46.028 CXX test/cpp_headers/fd_group.o 00:04:46.028 CC app/fio/nvme/fio_plugin.o 00:04:46.028 LINK overhead 00:04:46.028 LINK err_injection 00:04:46.028 CXX test/cpp_headers/fd.o 00:04:46.028 CXX test/cpp_headers/file.o 00:04:46.028 CC examples/bdev/hello_world/hello_bdev.o 00:04:46.286 LINK reconnect 00:04:46.286 CC examples/bdev/bdevperf/bdevperf.o 00:04:46.286 CC app/fio/bdev/fio_plugin.o 00:04:46.286 CC test/nvme/startup/startup.o 00:04:46.286 CXX test/cpp_headers/fsdev.o 00:04:46.286 CC test/accel/dif/dif.o 00:04:46.286 CXX test/cpp_headers/fsdev_module.o 00:04:46.286 CC test/nvme/reserve/reserve.o 00:04:46.286 LINK nvme_manage 00:04:46.286 LINK hello_bdev 00:04:46.545 LINK startup 00:04:46.545 CXX test/cpp_headers/ftl.o 00:04:46.545 LINK spdk_nvme 00:04:46.545 LINK reserve 00:04:46.545 CXX test/cpp_headers/fuse_dispatcher.o 00:04:46.545 CC examples/nvme/arbitration/arbitration.o 00:04:46.545 CC test/blobfs/mkfs/mkfs.o 00:04:46.803 CC examples/nvme/hotplug/hotplug.o 00:04:46.803 CXX test/cpp_headers/gpt_spec.o 00:04:46.803 LINK spdk_bdev 00:04:46.803 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.803 CC test/nvme/simple_copy/simple_copy.o 00:04:46.803 CC test/lvol/esnap/esnap.o 00:04:46.803 LINK mkfs 00:04:47.061 CXX test/cpp_headers/hexlify.o 00:04:47.061 LINK arbitration 00:04:47.061 LINK cmb_copy 00:04:47.061 LINK hotplug 00:04:47.061 CC 
examples/nvme/abort/abort.o 00:04:47.061 LINK dif 00:04:47.061 LINK simple_copy 00:04:47.061 CXX test/cpp_headers/histogram_data.o 00:04:47.061 LINK bdevperf 00:04:47.061 CXX test/cpp_headers/idxd.o 00:04:47.061 CXX test/cpp_headers/idxd_spec.o 00:04:47.061 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:47.321 CXX test/cpp_headers/init.o 00:04:47.321 CXX test/cpp_headers/ioat.o 00:04:47.321 LINK pmr_persistence 00:04:47.321 CC test/nvme/connect_stress/connect_stress.o 00:04:47.321 CC test/nvme/boot_partition/boot_partition.o 00:04:47.321 CC test/nvme/compliance/nvme_compliance.o 00:04:47.321 CXX test/cpp_headers/ioat_spec.o 00:04:47.321 CC test/nvme/fused_ordering/fused_ordering.o 00:04:47.321 LINK abort 00:04:47.598 CXX test/cpp_headers/iscsi_spec.o 00:04:47.598 CXX test/cpp_headers/json.o 00:04:47.598 CC test/bdev/bdevio/bdevio.o 00:04:47.598 LINK connect_stress 00:04:47.598 LINK boot_partition 00:04:47.598 LINK fused_ordering 00:04:47.598 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:47.598 CXX test/cpp_headers/jsonrpc.o 00:04:47.598 CXX test/cpp_headers/keyring.o 00:04:47.598 CXX test/cpp_headers/keyring_module.o 00:04:47.861 LINK nvme_compliance 00:04:47.861 CC test/nvme/fdp/fdp.o 00:04:47.861 CC examples/nvmf/nvmf/nvmf.o 00:04:47.861 LINK doorbell_aers 00:04:47.861 CXX test/cpp_headers/likely.o 00:04:47.861 CXX test/cpp_headers/log.o 00:04:47.861 CXX test/cpp_headers/lvol.o 00:04:47.861 CC test/nvme/cuse/cuse.o 00:04:47.861 CXX test/cpp_headers/md5.o 00:04:47.861 LINK bdevio 00:04:48.120 CXX test/cpp_headers/memory.o 00:04:48.120 CXX test/cpp_headers/mmio.o 00:04:48.120 CXX test/cpp_headers/nbd.o 00:04:48.120 CXX test/cpp_headers/net.o 00:04:48.120 CXX test/cpp_headers/notify.o 00:04:48.120 LINK fdp 00:04:48.120 LINK nvmf 00:04:48.120 CXX test/cpp_headers/nvme.o 00:04:48.120 CXX test/cpp_headers/nvme_intel.o 00:04:48.120 CXX test/cpp_headers/nvme_ocssd.o 00:04:48.120 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:48.120 CXX test/cpp_headers/nvme_spec.o 00:04:48.120 CXX test/cpp_headers/nvme_zns.o 00:04:48.379 CXX test/cpp_headers/nvmf_cmd.o 00:04:48.379 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:48.379 CXX test/cpp_headers/nvmf.o 00:04:48.379 CXX test/cpp_headers/nvmf_spec.o 00:04:48.379 CXX test/cpp_headers/nvmf_transport.o 00:04:48.379 CXX test/cpp_headers/opal.o 00:04:48.379 CXX test/cpp_headers/opal_spec.o 00:04:48.379 CXX test/cpp_headers/pci_ids.o 00:04:48.379 CXX test/cpp_headers/pipe.o 00:04:48.379 CXX test/cpp_headers/queue.o 00:04:48.379 CXX test/cpp_headers/reduce.o 00:04:48.637 CXX test/cpp_headers/rpc.o 00:04:48.637 CXX test/cpp_headers/scheduler.o 00:04:48.637 CXX test/cpp_headers/scsi.o 00:04:48.637 CXX test/cpp_headers/scsi_spec.o 00:04:48.637 CXX test/cpp_headers/sock.o 00:04:48.637 CXX test/cpp_headers/stdinc.o 00:04:48.637 CXX test/cpp_headers/string.o 00:04:48.637 CXX test/cpp_headers/thread.o 00:04:48.637 CXX test/cpp_headers/trace.o 00:04:48.637 CXX test/cpp_headers/trace_parser.o 00:04:48.637 CXX test/cpp_headers/tree.o 00:04:48.637 CXX test/cpp_headers/ublk.o 00:04:48.637 CXX test/cpp_headers/util.o 00:04:48.637 CXX test/cpp_headers/uuid.o 00:04:48.895 CXX test/cpp_headers/version.o 00:04:48.896 CXX test/cpp_headers/vfio_user_pci.o 00:04:48.896 CXX test/cpp_headers/vfio_user_spec.o 00:04:48.896 CXX test/cpp_headers/vhost.o 00:04:48.896 CXX test/cpp_headers/vmd.o 00:04:48.896 CXX test/cpp_headers/xor.o 00:04:48.896 CXX test/cpp_headers/zipf.o 00:04:49.154 LINK cuse 00:04:53.342 LINK esnap 00:04:53.342 00:04:53.342 real 1m24.641s 00:04:53.342 user 
7m10.617s 00:04:53.342 sys 1m51.997s 00:04:53.342 08:25:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:53.342 ************************************ 00:04:53.342 END TEST make 00:04:53.342 ************************************ 00:04:53.342 08:25:28 make -- common/autotest_common.sh@10 -- $ set +x 00:04:53.342 08:25:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:53.342 08:25:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:53.342 08:25:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:53.342 08:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.342 08:25:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:53.342 08:25:28 -- pm/common@44 -- $ pid=5302 00:04:53.342 08:25:28 -- pm/common@50 -- $ kill -TERM 5302 00:04:53.342 08:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.342 08:25:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:53.342 08:25:28 -- pm/common@44 -- $ pid=5303 00:04:53.342 08:25:28 -- pm/common@50 -- $ kill -TERM 5303 00:04:53.342 08:25:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:53.342 08:25:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:53.342 08:25:28 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.342 08:25:28 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.342 08:25:28 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.342 08:25:28 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.342 08:25:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.342 08:25:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.342 08:25:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.342 08:25:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.342 08:25:28 -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.342 08:25:28 -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.342 08:25:28 -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.342 08:25:28 -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.342 08:25:28 -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.342 08:25:28 -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.342 08:25:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.342 08:25:28 -- scripts/common.sh@344 -- # case "$op" in 00:04:53.342 08:25:28 -- scripts/common.sh@345 -- # : 1 00:04:53.342 08:25:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.342 08:25:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.342 08:25:28 -- scripts/common.sh@365 -- # decimal 1 00:04:53.342 08:25:28 -- scripts/common.sh@353 -- # local d=1 00:04:53.342 08:25:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.342 08:25:28 -- scripts/common.sh@355 -- # echo 1 00:04:53.342 08:25:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.342 08:25:28 -- scripts/common.sh@366 -- # decimal 2 00:04:53.342 08:25:28 -- scripts/common.sh@353 -- # local d=2 00:04:53.342 08:25:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.342 08:25:28 -- scripts/common.sh@355 -- # echo 2 00:04:53.342 08:25:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.342 08:25:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.342 08:25:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.342 08:25:28 -- scripts/common.sh@368 -- # return 0 00:04:53.342 08:25:28 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.342 08:25:28 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.342 --rc genhtml_branch_coverage=1 00:04:53.342 --rc genhtml_function_coverage=1 00:04:53.342 --rc genhtml_legend=1 00:04:53.342 --rc geninfo_all_blocks=1 00:04:53.342 --rc geninfo_unexecuted_blocks=1 00:04:53.342 00:04:53.342 ' 00:04:53.342 08:25:28 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.342 --rc genhtml_branch_coverage=1 00:04:53.342 --rc genhtml_function_coverage=1 00:04:53.342 --rc genhtml_legend=1 00:04:53.342 --rc geninfo_all_blocks=1 00:04:53.342 --rc geninfo_unexecuted_blocks=1 00:04:53.342 00:04:53.342 ' 00:04:53.342 08:25:28 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.342 --rc genhtml_branch_coverage=1 00:04:53.342 --rc genhtml_function_coverage=1 00:04:53.342 --rc genhtml_legend=1 00:04:53.342 --rc geninfo_all_blocks=1 00:04:53.342 --rc geninfo_unexecuted_blocks=1 00:04:53.342 00:04:53.342 ' 00:04:53.342 08:25:28 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.342 --rc genhtml_branch_coverage=1 00:04:53.342 --rc genhtml_function_coverage=1 00:04:53.342 --rc genhtml_legend=1 00:04:53.342 --rc geninfo_all_blocks=1 00:04:53.342 --rc geninfo_unexecuted_blocks=1 00:04:53.342 00:04:53.342 ' 00:04:53.342 08:25:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.342 08:25:28 -- nvmf/common.sh@7 -- # uname -s 00:04:53.342 08:25:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.342 08:25:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.342 08:25:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.342 08:25:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.342 08:25:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.342 08:25:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.342 08:25:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.342 08:25:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.342 08:25:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.342 08:25:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.600 08:25:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4694c2a8-1ece-45b9-bcc1-53b11818720f 00:04:53.600 
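The xtrace above is scripts/common.sh testing whether the installed lcov (1.15, per the banner further down) predates 2.0, which selects the old --rc lcov_branch_coverage/lcov_function_coverage option spelling. The traced split-and-compare, condensed into a standalone sketch:

  lt() {   # usage: lt 1.15 2 -> success when $1 < $2, compared field by field
      local IFS='.-:' v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo 'pre-2.0 lcov: use the --rc lcov_*_coverage=1 spelling'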
08:25:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=4694c2a8-1ece-45b9-bcc1-53b11818720f 00:04:53.600 08:25:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.600 08:25:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.600 08:25:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.600 08:25:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.600 08:25:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.600 08:25:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.600 08:25:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.600 08:25:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.600 08:25:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.600 08:25:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.600 08:25:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.600 08:25:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.600 08:25:28 -- paths/export.sh@5 -- # export PATH 00:04:53.601 08:25:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.601 08:25:28 -- nvmf/common.sh@51 -- # : 0 00:04:53.601 08:25:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.601 08:25:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.601 08:25:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.601 08:25:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.601 08:25:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.601 08:25:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.601 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.601 08:25:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.601 08:25:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.601 08:25:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.601 08:25:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:53.601 08:25:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:53.601 08:25:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:53.601 08:25:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:53.601 08:25:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:53.601 08:25:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:53.601 08:25:28 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:53.601 08:25:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:53.601 08:25:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:53.601 08:25:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:53.601 08:25:28 -- spdk/autotest.sh@48 -- # udevadm_pid=54761 00:04:53.601 08:25:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:53.601 08:25:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:53.601 08:25:28 -- pm/common@17 -- # local monitor 00:04:53.601 08:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.601 08:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.601 08:25:28 -- pm/common@21 -- # date +%s 00:04:53.601 08:25:28 -- pm/common@25 -- # sleep 1 00:04:53.601 08:25:28 -- pm/common@21 -- # date +%s 00:04:53.601 08:25:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732263928 00:04:53.601 08:25:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732263928 00:04:53.601 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732263928_collect-vmstat.pm.log 00:04:53.601 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732263928_collect-cpu-load.pm.log 00:04:54.536 08:25:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:54.536 08:25:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:54.536 08:25:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.536 08:25:29 -- common/autotest_common.sh@10 -- # set +x 00:04:54.536 08:25:29 -- spdk/autotest.sh@59 -- # create_test_list 00:04:54.536 08:25:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:54.536 08:25:29 -- common/autotest_common.sh@10 -- # set +x 00:04:54.536 08:25:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:54.536 08:25:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:54.536 08:25:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:54.536 08:25:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:54.536 08:25:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:54.536 08:25:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:54.536 08:25:29 -- common/autotest_common.sh@1457 -- # uname 00:04:54.793 08:25:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:54.793 08:25:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:54.793 08:25:29 -- common/autotest_common.sh@1477 -- # uname 00:04:54.793 08:25:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:54.793 08:25:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:54.794 08:25:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:54.794 lcov: LCOV version 1.15 00:04:54.794 08:25:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:09.677 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:09.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:24.598 08:25:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:24.598 08:25:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.598 08:25:59 -- common/autotest_common.sh@10 -- # set +x 00:05:24.598 08:25:59 -- spdk/autotest.sh@78 -- # rm -f 00:05:24.598 08:25:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.735 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:25.995 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:25.995 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:25.995 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:25.995 08:26:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:25.995 08:26:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:25.995 08:26:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:25.995 08:26:00 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:25.995 08:26:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.995 08:26:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.995 08:26:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.995 08:26:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.995 08:26:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:25.995 08:26:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:25.995 08:26:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.995 08:26:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:25.995 08:26:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:25.995 08:26:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:25.995 08:26:00 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.995 08:26:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.995 08:26:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:25.995 08:26:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:25.995 08:26:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.995 08:26:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:25.995 08:26:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.995 08:26:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.995 08:26:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:25.995 08:26:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:25.995 08:26:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:25.995 No valid GPT data, bailing 00:05:25.995 08:26:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.995 08:26:01 -- scripts/common.sh@394 -- # pt= 00:05:25.995 08:26:01 -- scripts/common.sh@395 -- # return 1 00:05:25.995 08:26:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:25.995 1+0 records in 00:05:25.995 1+0 records out 00:05:25.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191504 s, 54.8 MB/s 00:05:25.995 08:26:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.995 08:26:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.995 08:26:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:25.995 08:26:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:25.995 08:26:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:26.254 No valid GPT data, bailing 00:05:26.254 08:26:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:26.254 08:26:01 -- scripts/common.sh@394 -- # pt= 00:05:26.254 08:26:01 -- scripts/common.sh@395 -- # return 1 00:05:26.254 08:26:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:26.254 1+0 records in 00:05:26.254 1+0 records out 00:05:26.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614636 s, 171 MB/s 00:05:26.254 08:26:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.254 08:26:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.254 08:26:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:26.254 08:26:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:26.254 08:26:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:26.254 No valid GPT data, bailing 00:05:26.254 08:26:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:26.254 08:26:01 -- scripts/common.sh@394 -- # pt= 00:05:26.254 08:26:01 -- scripts/common.sh@395 -- # return 1 00:05:26.254 08:26:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:26.254 1+0 
records in 00:05:26.254 1+0 records out 00:05:26.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380172 s, 276 MB/s 00:05:26.254 08:26:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.254 08:26:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.254 08:26:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:26.254 08:26:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:26.254 08:26:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:26.254 No valid GPT data, bailing 00:05:26.254 08:26:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:26.254 08:26:01 -- scripts/common.sh@394 -- # pt= 00:05:26.254 08:26:01 -- scripts/common.sh@395 -- # return 1 00:05:26.254 08:26:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:26.254 1+0 records in 00:05:26.254 1+0 records out 00:05:26.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065242 s, 161 MB/s 00:05:26.254 08:26:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.254 08:26:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.254 08:26:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:26.254 08:26:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:26.254 08:26:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:26.254 No valid GPT data, bailing 00:05:26.513 08:26:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:26.513 08:26:01 -- scripts/common.sh@394 -- # pt= 00:05:26.513 08:26:01 -- scripts/common.sh@395 -- # return 1 00:05:26.513 08:26:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:26.513 1+0 records in 00:05:26.513 1+0 records out 00:05:26.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00545723 s, 192 MB/s 00:05:26.513 08:26:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.513 08:26:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.513 08:26:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:26.513 08:26:01 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:26.513 08:26:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:26.513 No valid GPT data, bailing 00:05:26.513 08:26:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:26.513 08:26:01 -- scripts/common.sh@394 -- # pt= 00:05:26.513 08:26:01 -- scripts/common.sh@395 -- # return 1 00:05:26.513 08:26:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:26.513 1+0 records in 00:05:26.513 1+0 records out 00:05:26.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499808 s, 210 MB/s 00:05:26.513 08:26:01 -- spdk/autotest.sh@105 -- # sync 00:05:26.513 08:26:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:26.513 08:26:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:26.513 08:26:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:29.843 08:26:04 -- spdk/autotest.sh@111 -- # uname -s 00:05:29.843 08:26:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:29.843 08:26:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:29.843 08:26:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:30.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.981 
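Each "No valid GPT data, bailing" plus 1 MiB dd above is autotest's pre-clean pass: whole namespaces (the !(*p*) glob skips partitions) that carry no partition table get their first MiB zeroed so stale metadata cannot leak into the tests. The traced decision chain, reduced to its blkid branch (the real block_in_use also consults scripts/spdk-gpt.py first):

  shopt -s extglob                   # needed for the !(*p*) glob, as in the trace
  for dev in /dev/nvme*n!(*p*); do
      pt=$(blkid -s PTTYPE -o value "$dev")        # pt='' for every disk here
      if [[ -z $pt ]]; then                        # no partition table -> not in use
          dd if=/dev/zero of="$dev" bs=1M count=1  # zero the first MiB
      fi
  done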
Hugepages 00:05:30.981 node hugesize free / total 00:05:30.981 node0 1048576kB 0 / 0 00:05:30.981 node0 2048kB 0 / 0 00:05:30.981 00:05:30.981 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.981 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:30.981 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:31.240 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:31.240 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:31.499 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:31.499 08:26:06 -- spdk/autotest.sh@117 -- # uname -s 00:05:31.499 08:26:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:31.499 08:26:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:31.499 08:26:06 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.006 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.006 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.006 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.006 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.006 08:26:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:34.385 08:26:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:34.385 08:26:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:34.385 08:26:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:34.385 08:26:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:34.385 08:26:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:34.385 08:26:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:34.385 08:26:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.385 08:26:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:34.385 08:26:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:34.385 08:26:09 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:34.385 08:26:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:34.385 08:26:09 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.213 Waiting for block devices as requested 00:05:35.213 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.213 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.213 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.472 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.754 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:40.754 08:26:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:40.754 08:26:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:40.754 08:26:15 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:40.754 08:26:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:40.754 08:26:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:40.754 08:26:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:40.754 08:26:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:40.754 08:26:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:40.754 08:26:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1543 -- # continue 00:05:40.754 08:26:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:40.754 08:26:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:40.754 08:26:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:40.754 08:26:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:40.754 08:26:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1543 -- # continue 00:05:40.754 08:26:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:40.754 08:26:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
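get_nvme_ctrlr_from_bdf, traced once per controller above, turns a PCI address from get_nvme_bdfs (gen_nvme.sh piped through jq -r '.config[].params.traddr') into its character device by walking the sysfs links; condensed:

  bdf=0000:00:10.0                                        # one of the four listed
  path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
  ctrlr=/dev/$(basename "$path")                          # here: /dev/nvme1
  echo "$ctrlr"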
00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:40.754 08:26:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:40.754 08:26:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:40.754 08:26:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1543 -- # continue 00:05:40.754 08:26:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:40.754 08:26:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:40.754 08:26:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:40.754 08:26:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:40.754 08:26:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:40.754 08:26:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:40.754 08:26:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
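The id-ctrl greps above pull the controller's OACS word (0x12a on these QEMU devices) and mask bit 3, Namespace Management, before checking that unvmcap is zero; the same probe in isolation (needs nvme-cli):

  ctrlr=/dev/nvme1                                          # from the lookup above
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0x12a' in this run
  if (( oacs & 0x8 )); then                                 # bit 3: NS management
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "$ctrlr: no unallocated capacity to revert"
  fi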
00:05:40.754 08:26:15 -- common/autotest_common.sh@1543 -- # continue 00:05:40.754 08:26:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:40.754 08:26:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.754 08:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:40.754 08:26:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:40.754 08:26:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.754 08:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:40.754 08:26:15 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.261 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.261 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.261 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.520 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.520 08:26:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:42.520 08:26:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.520 08:26:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.520 08:26:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:42.520 08:26:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:42.520 08:26:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:42.520 08:26:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:42.520 08:26:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:42.520 08:26:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:42.520 08:26:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:42.520 08:26:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:42.520 08:26:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:42.520 08:26:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:42.520 08:26:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.520 08:26:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:42.520 08:26:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:42.780 08:26:17 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:42.780 08:26:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:42.780 08:26:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.780 08:26:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:42.780 08:26:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:42.780 08:26:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.780 08:26:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.780 08:26:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:42.780 08:26:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:42.780 08:26:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.780 08:26:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.780 08:26:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:42.780 08:26:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:42.780 08:26:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:05:42.780 08:26:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.780 08:26:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:42.780 08:26:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:42.780 08:26:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.780 08:26:17 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:42.780 08:26:17 -- common/autotest_common.sh@1572 -- # return 0 00:05:42.780 08:26:17 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:42.780 08:26:17 -- common/autotest_common.sh@1580 -- # return 0 00:05:42.780 08:26:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:42.780 08:26:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:42.780 08:26:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.780 08:26:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.780 08:26:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:42.780 08:26:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.780 08:26:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.780 08:26:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:42.780 08:26:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:42.780 08:26:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.780 08:26:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.780 08:26:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.780 ************************************ 00:05:42.780 START TEST env 00:05:42.780 ************************************ 00:05:42.780 08:26:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.040 * Looking for test storage... 00:05:43.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:43.040 08:26:17 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.040 08:26:17 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.040 08:26:17 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.040 08:26:17 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.040 08:26:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.040 08:26:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.040 08:26:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.040 08:26:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.040 08:26:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.040 08:26:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.040 08:26:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.040 08:26:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.040 08:26:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.040 08:26:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.040 08:26:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.040 08:26:17 env -- scripts/common.sh@344 -- # case "$op" in 00:05:43.040 08:26:17 env -- scripts/common.sh@345 -- # : 1 00:05:43.040 08:26:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.040 08:26:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.040 08:26:17 env -- scripts/common.sh@365 -- # decimal 1 00:05:43.040 08:26:17 env -- scripts/common.sh@353 -- # local d=1 00:05:43.040 08:26:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.040 08:26:17 env -- scripts/common.sh@355 -- # echo 1 00:05:43.040 08:26:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.040 08:26:17 env -- scripts/common.sh@366 -- # decimal 2 00:05:43.040 08:26:17 env -- scripts/common.sh@353 -- # local d=2 00:05:43.040 08:26:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.040 08:26:17 env -- scripts/common.sh@355 -- # echo 2 00:05:43.040 08:26:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.040 08:26:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.040 08:26:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.040 08:26:17 env -- scripts/common.sh@368 -- # return 0 00:05:43.040 08:26:17 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.040 08:26:18 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.040 --rc genhtml_branch_coverage=1 00:05:43.040 --rc genhtml_function_coverage=1 00:05:43.040 --rc genhtml_legend=1 00:05:43.040 --rc geninfo_all_blocks=1 00:05:43.040 --rc geninfo_unexecuted_blocks=1 00:05:43.040 00:05:43.040 ' 00:05:43.040 08:26:18 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.040 --rc genhtml_branch_coverage=1 00:05:43.040 --rc genhtml_function_coverage=1 00:05:43.040 --rc genhtml_legend=1 00:05:43.040 --rc geninfo_all_blocks=1 00:05:43.040 --rc geninfo_unexecuted_blocks=1 00:05:43.040 00:05:43.040 ' 00:05:43.040 08:26:18 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.040 --rc genhtml_branch_coverage=1 00:05:43.040 --rc genhtml_function_coverage=1 00:05:43.041 --rc genhtml_legend=1 00:05:43.041 --rc geninfo_all_blocks=1 00:05:43.041 --rc geninfo_unexecuted_blocks=1 00:05:43.041 00:05:43.041 ' 00:05:43.041 08:26:18 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.041 --rc genhtml_branch_coverage=1 00:05:43.041 --rc genhtml_function_coverage=1 00:05:43.041 --rc genhtml_legend=1 00:05:43.041 --rc geninfo_all_blocks=1 00:05:43.041 --rc geninfo_unexecuted_blocks=1 00:05:43.041 00:05:43.041 ' 00:05:43.041 08:26:18 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.041 08:26:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.041 08:26:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.041 08:26:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.041 ************************************ 00:05:43.041 START TEST env_memory 00:05:43.041 ************************************ 00:05:43.041 08:26:18 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.041 00:05:43.041 00:05:43.041 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.041 http://cunit.sourceforge.net/ 00:05:43.041 00:05:43.041 00:05:43.041 Suite: memory 00:05:43.041 Test: alloc and free memory map ...[2024-11-22 08:26:18.086087] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.300 passed 00:05:43.300 Test: mem map translation ...[2024-11-22 08:26:18.130930] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:43.300 [2024-11-22 08:26:18.131100] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:43.300 [2024-11-22 08:26:18.131278] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:43.300 [2024-11-22 08:26:18.131344] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.300 passed 00:05:43.300 Test: mem map registration ...[2024-11-22 08:26:18.199214] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:43.300 [2024-11-22 08:26:18.199361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:43.300 passed 00:05:43.300 Test: mem map adjacent registrations ...passed 00:05:43.300 00:05:43.300 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.300 suites 1 1 n/a 0 0 00:05:43.300 tests 4 4 4 0 0 00:05:43.300 asserts 152 152 152 0 n/a 00:05:43.300 00:05:43.300 Elapsed time = 0.241 seconds 00:05:43.300 00:05:43.300 real 0m0.298s 00:05:43.300 user 0m0.257s 00:05:43.300 sys 0m0.028s 00:05:43.300 08:26:18 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.300 08:26:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:43.300 ************************************ 00:05:43.300 END TEST env_memory 00:05:43.300 ************************************ 00:05:43.300 08:26:18 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.300 08:26:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.300 08:26:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.300 08:26:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.560 ************************************ 00:05:43.560 START TEST env_vtophys 00:05:43.560 ************************************ 00:05:43.560 08:26:18 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.560 EAL: lib.eal log level changed from notice to debug 00:05:43.560 EAL: Detected lcore 0 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 1 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 2 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 3 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 4 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 5 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 6 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 7 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 8 as core 0 on socket 0 00:05:43.560 EAL: Detected lcore 9 as core 0 on socket 0 00:05:43.560 EAL: Maximum logical cores by configuration: 128 00:05:43.560 EAL: Detected CPU lcores: 10 00:05:43.560 EAL: Detected NUMA nodes: 1 00:05:43.560 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:43.560 EAL: Detected shared linkage of DPDK 00:05:43.560 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:43.560 EAL: Selected IOVA mode 'PA' 00:05:43.560 EAL: Probing VFIO support... 00:05:43.560 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.560 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:43.560 EAL: Ask a virtual area of 0x2e000 bytes 00:05:43.560 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:43.560 EAL: Setting up physically contiguous memory... 00:05:43.560 EAL: Setting maximum number of open files to 524288 00:05:43.560 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:43.560 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:43.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.560 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:43.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.560 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.560 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:43.560 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:43.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.560 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:43.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.560 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.560 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:43.560 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:43.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.560 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:43.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.560 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.560 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:43.560 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:43.560 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.560 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:43.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.560 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.560 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:43.560 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:43.560 EAL: Hugepages will be freed exactly as allocated. 00:05:43.560 EAL: No shared files mode enabled, IPC is disabled 00:05:43.560 EAL: No shared files mode enabled, IPC is disabled 00:05:43.560 EAL: TSC frequency is ~2490000 KHz 00:05:43.560 EAL: Main lcore 0 is ready (tid=7fbe1e72da40;cpuset=[0]) 00:05:43.560 EAL: Trying to obtain current memory policy. 00:05:43.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.560 EAL: Restoring previous memory policy: 0 00:05:43.560 EAL: request: mp_malloc_sync 00:05:43.560 EAL: No shared files mode enabled, IPC is disabled 00:05:43.560 EAL: Heap on socket 0 was expanded by 2MB 00:05:43.561 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.561 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:43.561 EAL: Mem event callback 'spdk:(nil)' registered 00:05:43.561 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:43.561 00:05:43.561 00:05:43.561 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.561 http://cunit.sourceforge.net/ 00:05:43.561 00:05:43.561 00:05:43.561 Suite: components_suite 00:05:44.130 Test: vtophys_malloc_test ...passed 00:05:44.130 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:44.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.130 EAL: Restoring previous memory policy: 4 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was expanded by 4MB 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was shrunk by 4MB 00:05:44.130 EAL: Trying to obtain current memory policy. 00:05:44.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.130 EAL: Restoring previous memory policy: 4 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was expanded by 6MB 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was shrunk by 6MB 00:05:44.130 EAL: Trying to obtain current memory policy. 00:05:44.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.130 EAL: Restoring previous memory policy: 4 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was expanded by 10MB 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was shrunk by 10MB 00:05:44.130 EAL: Trying to obtain current memory policy. 00:05:44.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.130 EAL: Restoring previous memory policy: 4 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was expanded by 18MB 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was shrunk by 18MB 00:05:44.130 EAL: Trying to obtain current memory policy. 00:05:44.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.130 EAL: Restoring previous memory policy: 4 00:05:44.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.130 EAL: request: mp_malloc_sync 00:05:44.130 EAL: No shared files mode enabled, IPC is disabled 00:05:44.130 EAL: Heap on socket 0 was expanded by 34MB 00:05:44.389 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.389 EAL: request: mp_malloc_sync 00:05:44.389 EAL: No shared files mode enabled, IPC is disabled 00:05:44.389 EAL: Heap on socket 0 was shrunk by 34MB 00:05:44.389 EAL: Trying to obtain current memory policy. 
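The alternating "expanded by / shrunk by" records here belong to vtophys_spdk_malloc_test: each pass pins MPOL_PREFERRED on socket 0, allocates a progressively larger buffer through the SPDK env layer, and the 'spdk:(nil)' mem event callback grows and then releases the EAL heap. A minimal sketch for replaying just this binary outside the harness, using the paths from this log; the setup.sh step and HUGEMEM size are assumptions about the local host, not part of this run:

```bash
# Hedged standalone replay of the vtophys suite running above.
# setup.sh and HUGEMEM are assumptions about the local environment;
# the binary path is taken from the run_test line in this log.
cd /home/vagrant/spdk_repo/spdk
sudo HUGEMEM=2048 ./scripts/setup.sh     # reserve ~2 GiB of hugepages (assumed sufficient)
sudo ./test/env/vtophys/vtophys          # runs vtophys_malloc_test and vtophys_spdk_malloc_test
```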
00:05:44.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.389 EAL: Restoring previous memory policy: 4 00:05:44.389 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.389 EAL: request: mp_malloc_sync 00:05:44.389 EAL: No shared files mode enabled, IPC is disabled 00:05:44.389 EAL: Heap on socket 0 was expanded by 66MB 00:05:44.389 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.389 EAL: request: mp_malloc_sync 00:05:44.389 EAL: No shared files mode enabled, IPC is disabled 00:05:44.389 EAL: Heap on socket 0 was shrunk by 66MB 00:05:44.648 EAL: Trying to obtain current memory policy. 00:05:44.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.648 EAL: Restoring previous memory policy: 4 00:05:44.648 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.648 EAL: request: mp_malloc_sync 00:05:44.648 EAL: No shared files mode enabled, IPC is disabled 00:05:44.648 EAL: Heap on socket 0 was expanded by 130MB 00:05:44.908 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.908 EAL: request: mp_malloc_sync 00:05:44.908 EAL: No shared files mode enabled, IPC is disabled 00:05:44.908 EAL: Heap on socket 0 was shrunk by 130MB 00:05:45.168 EAL: Trying to obtain current memory policy. 00:05:45.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.168 EAL: Restoring previous memory policy: 4 00:05:45.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.168 EAL: request: mp_malloc_sync 00:05:45.168 EAL: No shared files mode enabled, IPC is disabled 00:05:45.168 EAL: Heap on socket 0 was expanded by 258MB 00:05:45.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.736 EAL: request: mp_malloc_sync 00:05:45.736 EAL: No shared files mode enabled, IPC is disabled 00:05:45.736 EAL: Heap on socket 0 was shrunk by 258MB 00:05:45.997 EAL: Trying to obtain current memory policy. 00:05:45.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.256 EAL: Restoring previous memory policy: 4 00:05:46.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.256 EAL: request: mp_malloc_sync 00:05:46.256 EAL: No shared files mode enabled, IPC is disabled 00:05:46.256 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.226 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.226 EAL: request: mp_malloc_sync 00:05:47.226 EAL: No shared files mode enabled, IPC is disabled 00:05:47.226 EAL: Heap on socket 0 was shrunk by 514MB 00:05:47.856 EAL: Trying to obtain current memory policy. 
00:05:47.856 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.115 EAL: Restoring previous memory policy: 4 00:05:48.115 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.115 EAL: request: mp_malloc_sync 00:05:48.115 EAL: No shared files mode enabled, IPC is disabled 00:05:48.115 EAL: Heap on socket 0 was expanded by 1026MB 00:05:50.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.024 EAL: request: mp_malloc_sync 00:05:50.024 EAL: No shared files mode enabled, IPC is disabled 00:05:50.024 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:51.933 passed 00:05:51.933 00:05:51.933 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.933 suites 1 1 n/a 0 0 00:05:51.933 tests 2 2 2 0 0 00:05:51.933 asserts 5782 5782 5782 0 n/a 00:05:51.933 00:05:51.933 Elapsed time = 8.103 seconds 00:05:51.933 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.933 EAL: request: mp_malloc_sync 00:05:51.933 EAL: No shared files mode enabled, IPC is disabled 00:05:51.933 EAL: Heap on socket 0 was shrunk by 2MB 00:05:51.933 EAL: No shared files mode enabled, IPC is disabled 00:05:51.933 EAL: No shared files mode enabled, IPC is disabled 00:05:51.933 EAL: No shared files mode enabled, IPC is disabled 00:05:51.933 00:05:51.933 real 0m8.449s 00:05:51.933 user 0m7.439s 00:05:51.933 sys 0m0.849s 00:05:51.933 ************************************ 00:05:51.933 END TEST env_vtophys 00:05:51.933 ************************************ 00:05:51.933 08:26:26 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.933 08:26:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:51.933 08:26:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:51.933 08:26:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.933 08:26:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.933 08:26:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.933 ************************************ 00:05:51.934 START TEST env_pci 00:05:51.934 ************************************ 00:05:51.934 08:26:26 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:51.934 00:05:51.934 00:05:51.934 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.934 http://cunit.sourceforge.net/ 00:05:51.934 00:05:51.934 00:05:51.934 Suite: pci 00:05:51.934 Test: pci_hook ...[2024-11-22 08:26:26.961762] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57624 has claimed it 00:05:51.934 passed 00:05:51.934 00:05:51.934 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.934 suites 1 1 n/a 0 0 00:05:51.934 tests 1 1 1 0 0 00:05:51.934 asserts 25 25 25 0 n/a 00:05:51.934 00:05:51.934 Elapsed time = 0.006 seconds 00:05:51.934 EAL: Cannot find device (10000:00:01.0) 00:05:51.934 EAL: Failed to attach device on primary process 00:05:52.193 ************************************ 00:05:52.193 END TEST env_pci 00:05:52.193 ************************************ 00:05:52.193 00:05:52.193 real 0m0.110s 00:05:52.193 user 0m0.041s 00:05:52.193 sys 0m0.068s 00:05:52.193 08:26:27 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.194 08:26:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:52.194 08:26:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.194 08:26:27 env -- env/env.sh@15 -- # uname 00:05:52.194 08:26:27 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.194 08:26:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.194 08:26:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.194 08:26:27 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:52.194 08:26:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.194 08:26:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.194 ************************************ 00:05:52.194 START TEST env_dpdk_post_init 00:05:52.194 ************************************ 00:05:52.194 08:26:27 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.194 EAL: Detected CPU lcores: 10 00:05:52.194 EAL: Detected NUMA nodes: 1 00:05:52.194 EAL: Detected shared linkage of DPDK 00:05:52.194 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.194 EAL: Selected IOVA mode 'PA' 00:05:52.453 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.453 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:52.453 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:52.453 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:52.453 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:52.453 Starting DPDK initialization... 00:05:52.453 Starting SPDK post initialization... 00:05:52.453 SPDK NVMe probe 00:05:52.453 Attaching to 0000:00:10.0 00:05:52.454 Attaching to 0000:00:11.0 00:05:52.454 Attaching to 0000:00:12.0 00:05:52.454 Attaching to 0000:00:13.0 00:05:52.454 Attached to 0000:00:10.0 00:05:52.454 Attached to 0000:00:11.0 00:05:52.454 Attached to 0000:00:13.0 00:05:52.454 Attached to 0000:00:12.0 00:05:52.454 Cleaning up... 
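At this point env_dpdk_post_init has probed and attached all four emulated NVMe controllers (0000:00:10.0 through 0000:00:13.0) through the spdk_nvme PCI driver. The invocation is recorded verbatim in the run_test line above, so the probe can be replayed by hand:

```bash
# Replay the post-init probe with the exact flags the harness passed above.
/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
    -c 0x1 --base-virtaddr=0x200000000000
```

The --base-virtaddr flag is the one env.sh appends after its uname check (visible in the trace); it asks DPDK to reserve its virtual address space at a fixed base so mappings stay predictable.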
00:05:52.454 ************************************ 00:05:52.454 END TEST env_dpdk_post_init 00:05:52.454 ************************************ 00:05:52.454 00:05:52.454 real 0m0.319s 00:05:52.454 user 0m0.112s 00:05:52.454 sys 0m0.109s 00:05:52.454 08:26:27 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.454 08:26:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.454 08:26:27 env -- env/env.sh@26 -- # uname 00:05:52.454 08:26:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.454 08:26:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.454 08:26:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.454 08:26:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.454 08:26:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.454 ************************************ 00:05:52.454 START TEST env_mem_callbacks 00:05:52.454 ************************************ 00:05:52.454 08:26:27 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.713 EAL: Detected CPU lcores: 10 00:05:52.713 EAL: Detected NUMA nodes: 1 00:05:52.713 EAL: Detected shared linkage of DPDK 00:05:52.713 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.713 EAL: Selected IOVA mode 'PA' 00:05:52.713 00:05:52.713 00:05:52.713 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.713 http://cunit.sourceforge.net/ 00:05:52.713 00:05:52.713 00:05:52.713 Suite: memory 00:05:52.713 Test: test ... 00:05:52.713 register 0x200000200000 2097152 00:05:52.713 malloc 3145728 00:05:52.713 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.713 register 0x200000400000 4194304 00:05:52.713 buf 0x2000004fffc0 len 3145728 PASSED 00:05:52.713 malloc 64 00:05:52.713 buf 0x2000004ffec0 len 64 PASSED 00:05:52.713 malloc 4194304 00:05:52.713 register 0x200000800000 6291456 00:05:52.713 buf 0x2000009fffc0 len 4194304 PASSED 00:05:52.713 free 0x2000004fffc0 3145728 00:05:52.713 free 0x2000004ffec0 64 00:05:52.713 unregister 0x200000400000 4194304 PASSED 00:05:52.713 free 0x2000009fffc0 4194304 00:05:52.713 unregister 0x200000800000 6291456 PASSED 00:05:52.713 malloc 8388608 00:05:52.713 register 0x200000400000 10485760 00:05:52.713 buf 0x2000005fffc0 len 8388608 PASSED 00:05:52.713 free 0x2000005fffc0 8388608 00:05:52.713 unregister 0x200000400000 10485760 PASSED 00:05:52.713 passed 00:05:52.713 00:05:52.713 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.713 suites 1 1 n/a 0 0 00:05:52.713 tests 1 1 1 0 0 00:05:52.713 asserts 15 15 15 0 n/a 00:05:52.713 00:05:52.713 Elapsed time = 0.080 seconds 00:05:52.973 00:05:52.973 real 0m0.294s 00:05:52.973 user 0m0.103s 00:05:52.973 sys 0m0.089s 00:05:52.973 08:26:27 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.973 ************************************ 00:05:52.973 END TEST env_mem_callbacks 00:05:52.973 ************************************ 00:05:52.974 08:26:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:52.974 ************************************ 00:05:52.974 END TEST env 00:05:52.974 ************************************ 00:05:52.974 00:05:52.974 real 0m10.099s 00:05:52.974 user 0m8.231s 00:05:52.974 sys 0m1.489s 00:05:52.974 08:26:27 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.974 08:26:27 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.974 08:26:27 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.974 08:26:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.974 08:26:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.974 08:26:27 -- common/autotest_common.sh@10 -- # set +x 00:05:52.974 ************************************ 00:05:52.974 START TEST rpc 00:05:52.974 ************************************ 00:05:52.974 08:26:27 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.974 * Looking for test storage... 00:05:53.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.234 08:26:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.234 08:26:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.234 08:26:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.234 08:26:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.234 08:26:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.234 08:26:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.234 08:26:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.234 08:26:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.234 08:26:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.234 08:26:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.234 08:26:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.234 08:26:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.234 08:26:28 rpc -- scripts/common.sh@345 -- # : 1 00:05:53.234 08:26:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.234 08:26:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.234 08:26:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.234 08:26:28 rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.234 08:26:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.234 08:26:28 rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.234 08:26:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.234 08:26:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.234 08:26:28 rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.234 08:26:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.234 08:26:28 rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.234 08:26:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.234 08:26:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.234 08:26:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.234 08:26:28 rpc -- scripts/common.sh@368 -- # return 0 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.234 --rc genhtml_branch_coverage=1 00:05:53.234 --rc genhtml_function_coverage=1 00:05:53.234 --rc genhtml_legend=1 00:05:53.234 --rc geninfo_all_blocks=1 00:05:53.234 --rc geninfo_unexecuted_blocks=1 00:05:53.234 00:05:53.234 ' 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.234 --rc genhtml_branch_coverage=1 00:05:53.234 --rc genhtml_function_coverage=1 00:05:53.234 --rc genhtml_legend=1 00:05:53.234 --rc geninfo_all_blocks=1 00:05:53.234 --rc geninfo_unexecuted_blocks=1 00:05:53.234 00:05:53.234 ' 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.234 --rc genhtml_branch_coverage=1 00:05:53.234 --rc genhtml_function_coverage=1 00:05:53.234 --rc genhtml_legend=1 00:05:53.234 --rc geninfo_all_blocks=1 00:05:53.234 --rc geninfo_unexecuted_blocks=1 00:05:53.234 00:05:53.234 ' 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.234 --rc genhtml_branch_coverage=1 00:05:53.234 --rc genhtml_function_coverage=1 00:05:53.234 --rc genhtml_legend=1 00:05:53.234 --rc geninfo_all_blocks=1 00:05:53.234 --rc geninfo_unexecuted_blocks=1 00:05:53.234 00:05:53.234 ' 00:05:53.234 08:26:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57756 00:05:53.234 08:26:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:53.234 08:26:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.234 08:26:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57756 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 57756 ']' 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
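The trace above shows rpc.sh launching spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then parking in waitforlisten until /var/tmp/spdk.sock accepts connections. A minimal sketch of that start-and-wait handshake, assuming rpc.py from the same tree; the polling loop is an illustration, not the harness's actual waitforlisten:

```bash
# Minimal sketch: start the target, then poll the RPC socket until it answers.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -e bdev &
tgt_pid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2                             # keep retrying until the socket is up
done
echo "spdk_tgt (pid $tgt_pid) is listening"
```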
00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.234 08:26:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.234 [2024-11-22 08:26:28.302228] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:05:53.234 [2024-11-22 08:26:28.302358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57756 ] 00:05:53.494 [2024-11-22 08:26:28.482881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.753 [2024-11-22 08:26:28.592747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:53.753 [2024-11-22 08:26:28.593033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57756' to capture a snapshot of events at runtime. 00:05:53.753 [2024-11-22 08:26:28.593057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.753 [2024-11-22 08:26:28.593071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.753 [2024-11-22 08:26:28.593081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57756 for offline analysis/debug. 00:05:53.753 [2024-11-22 08:26:28.594174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.691 08:26:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.691 08:26:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.691 08:26:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.691 08:26:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.691 08:26:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:54.691 08:26:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:54.691 08:26:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.691 08:26:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.691 08:26:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.691 ************************************ 00:05:54.691 START TEST rpc_integrity 00:05:54.691 ************************************ 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.691 08:26:29 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.691 { 00:05:54.691 "name": "Malloc0", 00:05:54.691 "aliases": [ 00:05:54.691 "4fda6cba-8197-4e35-a451-a9c67f5d7c19" 00:05:54.691 ], 00:05:54.691 "product_name": "Malloc disk", 00:05:54.691 "block_size": 512, 00:05:54.691 "num_blocks": 16384, 00:05:54.691 "uuid": "4fda6cba-8197-4e35-a451-a9c67f5d7c19", 00:05:54.691 "assigned_rate_limits": { 00:05:54.691 "rw_ios_per_sec": 0, 00:05:54.691 "rw_mbytes_per_sec": 0, 00:05:54.691 "r_mbytes_per_sec": 0, 00:05:54.691 "w_mbytes_per_sec": 0 00:05:54.691 }, 00:05:54.691 "claimed": false, 00:05:54.691 "zoned": false, 00:05:54.691 "supported_io_types": { 00:05:54.691 "read": true, 00:05:54.691 "write": true, 00:05:54.691 "unmap": true, 00:05:54.691 "flush": true, 00:05:54.691 "reset": true, 00:05:54.691 "nvme_admin": false, 00:05:54.691 "nvme_io": false, 00:05:54.691 "nvme_io_md": false, 00:05:54.691 "write_zeroes": true, 00:05:54.691 "zcopy": true, 00:05:54.691 "get_zone_info": false, 00:05:54.691 "zone_management": false, 00:05:54.691 "zone_append": false, 00:05:54.691 "compare": false, 00:05:54.691 "compare_and_write": false, 00:05:54.691 "abort": true, 00:05:54.691 "seek_hole": false, 00:05:54.691 "seek_data": false, 00:05:54.691 "copy": true, 00:05:54.691 "nvme_iov_md": false 00:05:54.691 }, 00:05:54.691 "memory_domains": [ 00:05:54.691 { 00:05:54.691 "dma_device_id": "system", 00:05:54.691 "dma_device_type": 1 00:05:54.691 }, 00:05:54.691 { 00:05:54.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.691 "dma_device_type": 2 00:05:54.691 } 00:05:54.691 ], 00:05:54.691 "driver_specific": {} 00:05:54.691 } 00:05:54.691 ]' 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.691 [2024-11-22 08:26:29.608830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:54.691 [2024-11-22 08:26:29.608890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.691 [2024-11-22 08:26:29.608919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:54.691 [2024-11-22 08:26:29.608933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.691 [2024-11-22 08:26:29.611383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.691 [2024-11-22 08:26:29.611540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.691 Passthru0 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.691 
08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.691 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.691 { 00:05:54.691 "name": "Malloc0", 00:05:54.691 "aliases": [ 00:05:54.691 "4fda6cba-8197-4e35-a451-a9c67f5d7c19" 00:05:54.691 ], 00:05:54.691 "product_name": "Malloc disk", 00:05:54.691 "block_size": 512, 00:05:54.691 "num_blocks": 16384, 00:05:54.691 "uuid": "4fda6cba-8197-4e35-a451-a9c67f5d7c19", 00:05:54.691 "assigned_rate_limits": { 00:05:54.691 "rw_ios_per_sec": 0, 00:05:54.691 "rw_mbytes_per_sec": 0, 00:05:54.691 "r_mbytes_per_sec": 0, 00:05:54.691 "w_mbytes_per_sec": 0 00:05:54.691 }, 00:05:54.691 "claimed": true, 00:05:54.691 "claim_type": "exclusive_write", 00:05:54.691 "zoned": false, 00:05:54.691 "supported_io_types": { 00:05:54.691 "read": true, 00:05:54.691 "write": true, 00:05:54.691 "unmap": true, 00:05:54.691 "flush": true, 00:05:54.691 "reset": true, 00:05:54.691 "nvme_admin": false, 00:05:54.691 "nvme_io": false, 00:05:54.691 "nvme_io_md": false, 00:05:54.691 "write_zeroes": true, 00:05:54.691 "zcopy": true, 00:05:54.691 "get_zone_info": false, 00:05:54.691 "zone_management": false, 00:05:54.691 "zone_append": false, 00:05:54.691 "compare": false, 00:05:54.691 "compare_and_write": false, 00:05:54.691 "abort": true, 00:05:54.691 "seek_hole": false, 00:05:54.691 "seek_data": false, 00:05:54.691 "copy": true, 00:05:54.691 "nvme_iov_md": false 00:05:54.691 }, 00:05:54.691 "memory_domains": [ 00:05:54.691 { 00:05:54.691 "dma_device_id": "system", 00:05:54.691 "dma_device_type": 1 00:05:54.691 }, 00:05:54.691 { 00:05:54.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.691 "dma_device_type": 2 00:05:54.691 } 00:05:54.691 ], 00:05:54.691 "driver_specific": {} 00:05:54.691 }, 00:05:54.691 { 00:05:54.691 "name": "Passthru0", 00:05:54.691 "aliases": [ 00:05:54.691 "4065556f-1c84-5162-b127-185acb8bab36" 00:05:54.691 ], 00:05:54.691 "product_name": "passthru", 00:05:54.691 "block_size": 512, 00:05:54.691 "num_blocks": 16384, 00:05:54.691 "uuid": "4065556f-1c84-5162-b127-185acb8bab36", 00:05:54.691 "assigned_rate_limits": { 00:05:54.691 "rw_ios_per_sec": 0, 00:05:54.691 "rw_mbytes_per_sec": 0, 00:05:54.691 "r_mbytes_per_sec": 0, 00:05:54.691 "w_mbytes_per_sec": 0 00:05:54.691 }, 00:05:54.691 "claimed": false, 00:05:54.691 "zoned": false, 00:05:54.691 "supported_io_types": { 00:05:54.691 "read": true, 00:05:54.691 "write": true, 00:05:54.691 "unmap": true, 00:05:54.691 "flush": true, 00:05:54.691 "reset": true, 00:05:54.691 "nvme_admin": false, 00:05:54.691 "nvme_io": false, 00:05:54.691 "nvme_io_md": false, 00:05:54.691 "write_zeroes": true, 00:05:54.691 "zcopy": true, 00:05:54.691 "get_zone_info": false, 00:05:54.691 "zone_management": false, 00:05:54.691 "zone_append": false, 00:05:54.691 "compare": false, 00:05:54.691 "compare_and_write": false, 00:05:54.691 "abort": true, 00:05:54.691 "seek_hole": false, 00:05:54.691 "seek_data": false, 00:05:54.691 "copy": true, 00:05:54.691 "nvme_iov_md": false 00:05:54.691 }, 00:05:54.691 "memory_domains": [ 00:05:54.691 { 00:05:54.691 "dma_device_id": "system", 00:05:54.691 "dma_device_type": 1 00:05:54.691 }, 00:05:54.691 { 00:05:54.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.691 "dma_device_type": 2 
00:05:54.691 } 00:05:54.691 ], 00:05:54.691 "driver_specific": { 00:05:54.691 "passthru": { 00:05:54.691 "name": "Passthru0", 00:05:54.691 "base_bdev_name": "Malloc0" 00:05:54.691 } 00:05:54.691 } 00:05:54.691 } 00:05:54.691 ]' 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.691 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.692 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.951 ************************************ 00:05:54.951 END TEST rpc_integrity 00:05:54.951 ************************************ 00:05:54.951 08:26:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.951 00:05:54.951 real 0m0.346s 00:05:54.951 user 0m0.182s 00:05:54.951 sys 0m0.065s 00:05:54.951 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.951 08:26:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.951 08:26:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:54.951 08:26:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.951 08:26:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.951 08:26:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.951 ************************************ 00:05:54.951 START TEST rpc_plugins 00:05:54.951 ************************************ 00:05:54.951 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:54.951 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:54.951 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.951 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.951 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.951 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:54.951 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:54.951 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.951 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.951 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.951 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:54.951 { 00:05:54.951 "name": "Malloc1", 00:05:54.951 "aliases": 
[ 00:05:54.951 "1b58bdbe-a6eb-40f9-a8bf-420643929002" 00:05:54.951 ], 00:05:54.951 "product_name": "Malloc disk", 00:05:54.951 "block_size": 4096, 00:05:54.951 "num_blocks": 256, 00:05:54.951 "uuid": "1b58bdbe-a6eb-40f9-a8bf-420643929002", 00:05:54.951 "assigned_rate_limits": { 00:05:54.951 "rw_ios_per_sec": 0, 00:05:54.951 "rw_mbytes_per_sec": 0, 00:05:54.951 "r_mbytes_per_sec": 0, 00:05:54.951 "w_mbytes_per_sec": 0 00:05:54.951 }, 00:05:54.951 "claimed": false, 00:05:54.951 "zoned": false, 00:05:54.951 "supported_io_types": { 00:05:54.951 "read": true, 00:05:54.951 "write": true, 00:05:54.951 "unmap": true, 00:05:54.951 "flush": true, 00:05:54.951 "reset": true, 00:05:54.951 "nvme_admin": false, 00:05:54.951 "nvme_io": false, 00:05:54.951 "nvme_io_md": false, 00:05:54.951 "write_zeroes": true, 00:05:54.951 "zcopy": true, 00:05:54.951 "get_zone_info": false, 00:05:54.951 "zone_management": false, 00:05:54.951 "zone_append": false, 00:05:54.951 "compare": false, 00:05:54.951 "compare_and_write": false, 00:05:54.951 "abort": true, 00:05:54.951 "seek_hole": false, 00:05:54.951 "seek_data": false, 00:05:54.951 "copy": true, 00:05:54.951 "nvme_iov_md": false 00:05:54.951 }, 00:05:54.951 "memory_domains": [ 00:05:54.951 { 00:05:54.951 "dma_device_id": "system", 00:05:54.951 "dma_device_type": 1 00:05:54.951 }, 00:05:54.951 { 00:05:54.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.951 "dma_device_type": 2 00:05:54.951 } 00:05:54.951 ], 00:05:54.951 "driver_specific": {} 00:05:54.951 } 00:05:54.951 ]' 00:05:54.951 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:54.951 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:54.951 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:54.951 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.952 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.952 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.952 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:54.952 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.952 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.952 08:26:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.952 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:54.952 08:26:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:55.211 ************************************ 00:05:55.211 END TEST rpc_plugins 00:05:55.211 ************************************ 00:05:55.211 08:26:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:55.211 00:05:55.211 real 0m0.173s 00:05:55.211 user 0m0.095s 00:05:55.211 sys 0m0.033s 00:05:55.211 08:26:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.211 08:26:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.211 08:26:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:55.211 08:26:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.211 08:26:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.211 08:26:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.211 ************************************ 00:05:55.211 START TEST rpc_trace_cmd_test 00:05:55.211 ************************************ 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:55.211 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57756", 00:05:55.211 "tpoint_group_mask": "0x8", 00:05:55.211 "iscsi_conn": { 00:05:55.211 "mask": "0x2", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "scsi": { 00:05:55.211 "mask": "0x4", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "bdev": { 00:05:55.211 "mask": "0x8", 00:05:55.211 "tpoint_mask": "0xffffffffffffffff" 00:05:55.211 }, 00:05:55.211 "nvmf_rdma": { 00:05:55.211 "mask": "0x10", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "nvmf_tcp": { 00:05:55.211 "mask": "0x20", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "ftl": { 00:05:55.211 "mask": "0x40", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "blobfs": { 00:05:55.211 "mask": "0x80", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "dsa": { 00:05:55.211 "mask": "0x200", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "thread": { 00:05:55.211 "mask": "0x400", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "nvme_pcie": { 00:05:55.211 "mask": "0x800", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "iaa": { 00:05:55.211 "mask": "0x1000", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "nvme_tcp": { 00:05:55.211 "mask": "0x2000", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "bdev_nvme": { 00:05:55.211 "mask": "0x4000", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "sock": { 00:05:55.211 "mask": "0x8000", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "blob": { 00:05:55.211 "mask": "0x10000", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "bdev_raid": { 00:05:55.211 "mask": "0x20000", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 }, 00:05:55.211 "scheduler": { 00:05:55.211 "mask": "0x40000", 00:05:55.211 "tpoint_mask": "0x0" 00:05:55.211 } 00:05:55.211 }' 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:55.211 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:55.471 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:55.471 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:55.471 ************************************ 00:05:55.471 END TEST rpc_trace_cmd_test 00:05:55.471 ************************************ 00:05:55.471 08:26:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:55.471 00:05:55.471 real 0m0.231s 
00:05:55.471 user 0m0.181s 00:05:55.471 sys 0m0.039s 00:05:55.471 08:26:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.471 08:26:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.471 08:26:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:55.471 08:26:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:55.471 08:26:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:55.471 08:26:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.471 08:26:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.471 08:26:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.471 ************************************ 00:05:55.471 START TEST rpc_daemon_integrity 00:05:55.471 ************************************ 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.471 { 00:05:55.471 "name": "Malloc2", 00:05:55.471 "aliases": [ 00:05:55.471 "2698f5f6-2e0e-491f-af5d-477fe2047504" 00:05:55.471 ], 00:05:55.471 "product_name": "Malloc disk", 00:05:55.471 "block_size": 512, 00:05:55.471 "num_blocks": 16384, 00:05:55.471 "uuid": "2698f5f6-2e0e-491f-af5d-477fe2047504", 00:05:55.471 "assigned_rate_limits": { 00:05:55.471 "rw_ios_per_sec": 0, 00:05:55.471 "rw_mbytes_per_sec": 0, 00:05:55.471 "r_mbytes_per_sec": 0, 00:05:55.471 "w_mbytes_per_sec": 0 00:05:55.471 }, 00:05:55.471 "claimed": false, 00:05:55.471 "zoned": false, 00:05:55.471 "supported_io_types": { 00:05:55.471 "read": true, 00:05:55.471 "write": true, 00:05:55.471 "unmap": true, 00:05:55.471 "flush": true, 00:05:55.471 "reset": true, 00:05:55.471 "nvme_admin": false, 00:05:55.471 "nvme_io": false, 00:05:55.471 "nvme_io_md": false, 00:05:55.471 "write_zeroes": true, 00:05:55.471 "zcopy": true, 00:05:55.471 "get_zone_info": false, 00:05:55.471 "zone_management": false, 00:05:55.471 "zone_append": false, 00:05:55.471 "compare": false, 00:05:55.471 
"compare_and_write": false, 00:05:55.471 "abort": true, 00:05:55.471 "seek_hole": false, 00:05:55.471 "seek_data": false, 00:05:55.471 "copy": true, 00:05:55.471 "nvme_iov_md": false 00:05:55.471 }, 00:05:55.471 "memory_domains": [ 00:05:55.471 { 00:05:55.471 "dma_device_id": "system", 00:05:55.471 "dma_device_type": 1 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.471 "dma_device_type": 2 00:05:55.471 } 00:05:55.471 ], 00:05:55.471 "driver_specific": {} 00:05:55.471 } 00:05:55.471 ]' 00:05:55.471 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.731 [2024-11-22 08:26:30.560140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:55.731 [2024-11-22 08:26:30.560316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.731 [2024-11-22 08:26:30.560344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:55.731 [2024-11-22 08:26:30.560359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.731 [2024-11-22 08:26:30.562735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.731 [2024-11-22 08:26:30.562781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.731 Passthru0 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.731 { 00:05:55.731 "name": "Malloc2", 00:05:55.731 "aliases": [ 00:05:55.731 "2698f5f6-2e0e-491f-af5d-477fe2047504" 00:05:55.731 ], 00:05:55.731 "product_name": "Malloc disk", 00:05:55.731 "block_size": 512, 00:05:55.731 "num_blocks": 16384, 00:05:55.731 "uuid": "2698f5f6-2e0e-491f-af5d-477fe2047504", 00:05:55.731 "assigned_rate_limits": { 00:05:55.731 "rw_ios_per_sec": 0, 00:05:55.731 "rw_mbytes_per_sec": 0, 00:05:55.731 "r_mbytes_per_sec": 0, 00:05:55.731 "w_mbytes_per_sec": 0 00:05:55.731 }, 00:05:55.731 "claimed": true, 00:05:55.731 "claim_type": "exclusive_write", 00:05:55.731 "zoned": false, 00:05:55.731 "supported_io_types": { 00:05:55.731 "read": true, 00:05:55.731 "write": true, 00:05:55.731 "unmap": true, 00:05:55.731 "flush": true, 00:05:55.731 "reset": true, 00:05:55.731 "nvme_admin": false, 00:05:55.731 "nvme_io": false, 00:05:55.731 "nvme_io_md": false, 00:05:55.731 "write_zeroes": true, 00:05:55.731 "zcopy": true, 00:05:55.731 "get_zone_info": false, 00:05:55.731 "zone_management": false, 00:05:55.731 "zone_append": false, 00:05:55.731 "compare": false, 00:05:55.731 "compare_and_write": false, 00:05:55.731 "abort": true, 00:05:55.731 "seek_hole": false, 00:05:55.731 "seek_data": false, 
00:05:55.731 "copy": true, 00:05:55.731 "nvme_iov_md": false 00:05:55.731 }, 00:05:55.731 "memory_domains": [ 00:05:55.731 { 00:05:55.731 "dma_device_id": "system", 00:05:55.731 "dma_device_type": 1 00:05:55.731 }, 00:05:55.731 { 00:05:55.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.731 "dma_device_type": 2 00:05:55.731 } 00:05:55.731 ], 00:05:55.731 "driver_specific": {} 00:05:55.731 }, 00:05:55.731 { 00:05:55.731 "name": "Passthru0", 00:05:55.731 "aliases": [ 00:05:55.731 "9396b6da-639b-523e-a95f-0036396b07c6" 00:05:55.731 ], 00:05:55.731 "product_name": "passthru", 00:05:55.731 "block_size": 512, 00:05:55.731 "num_blocks": 16384, 00:05:55.731 "uuid": "9396b6da-639b-523e-a95f-0036396b07c6", 00:05:55.731 "assigned_rate_limits": { 00:05:55.731 "rw_ios_per_sec": 0, 00:05:55.731 "rw_mbytes_per_sec": 0, 00:05:55.731 "r_mbytes_per_sec": 0, 00:05:55.731 "w_mbytes_per_sec": 0 00:05:55.731 }, 00:05:55.731 "claimed": false, 00:05:55.731 "zoned": false, 00:05:55.731 "supported_io_types": { 00:05:55.731 "read": true, 00:05:55.731 "write": true, 00:05:55.731 "unmap": true, 00:05:55.731 "flush": true, 00:05:55.731 "reset": true, 00:05:55.731 "nvme_admin": false, 00:05:55.731 "nvme_io": false, 00:05:55.731 "nvme_io_md": false, 00:05:55.731 "write_zeroes": true, 00:05:55.731 "zcopy": true, 00:05:55.731 "get_zone_info": false, 00:05:55.731 "zone_management": false, 00:05:55.731 "zone_append": false, 00:05:55.731 "compare": false, 00:05:55.731 "compare_and_write": false, 00:05:55.731 "abort": true, 00:05:55.731 "seek_hole": false, 00:05:55.731 "seek_data": false, 00:05:55.731 "copy": true, 00:05:55.731 "nvme_iov_md": false 00:05:55.731 }, 00:05:55.731 "memory_domains": [ 00:05:55.731 { 00:05:55.731 "dma_device_id": "system", 00:05:55.731 "dma_device_type": 1 00:05:55.731 }, 00:05:55.731 { 00:05:55.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.731 "dma_device_type": 2 00:05:55.731 } 00:05:55.731 ], 00:05:55.731 "driver_specific": { 00:05:55.731 "passthru": { 00:05:55.731 "name": "Passthru0", 00:05:55.731 "base_bdev_name": "Malloc2" 00:05:55.731 } 00:05:55.731 } 00:05:55.731 } 00:05:55.731 ]' 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
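rpc_daemon_integrity repeats the malloc-to-passthru lifecycle from rpc_integrity: create a malloc bdev, claim it with a passthru bdev (note claim_type flipping to "exclusive_write" in the bdev_get_bdevs dump above), then delete both and confirm the bdev list drains back to empty. A condensed sketch of that cycle using the same RPCs visible in the xtrace output:

```bash
# Condensed passthru claim cycle, built from the RPCs exercised above.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

malloc=$(rpc bdev_malloc_create 8 512)              # prints the new bdev name, e.g. Malloc0
rpc bdev_passthru_create -b "$malloc" -p Passthru0  # claim the base bdev
rpc bdev_get_bdevs | jq length                      # 2: the malloc bdev plus Passthru0
rpc bdev_passthru_delete Passthru0
rpc bdev_malloc_delete "$malloc"
rpc bdev_get_bdevs | jq length                      # back to 0
```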
00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.731 ************************************ 00:05:55.731 END TEST rpc_daemon_integrity 00:05:55.731 ************************************ 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.731 00:05:55.731 real 0m0.328s 00:05:55.731 user 0m0.177s 00:05:55.731 sys 0m0.057s 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.731 08:26:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.731 08:26:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:55.731 08:26:30 rpc -- rpc/rpc.sh@84 -- # killprocess 57756 00:05:55.731 08:26:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 57756 ']' 00:05:55.731 08:26:30 rpc -- common/autotest_common.sh@958 -- # kill -0 57756 00:05:55.731 08:26:30 rpc -- common/autotest_common.sh@959 -- # uname 00:05:55.731 08:26:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.731 08:26:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57756 00:05:55.990 killing process with pid 57756 00:05:55.990 08:26:30 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.990 08:26:30 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.990 08:26:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57756' 00:05:55.990 08:26:30 rpc -- common/autotest_common.sh@973 -- # kill 57756 00:05:55.990 08:26:30 rpc -- common/autotest_common.sh@978 -- # wait 57756 00:05:58.526 00:05:58.526 real 0m5.269s 00:05:58.526 user 0m5.724s 00:05:58.526 sys 0m0.991s 00:05:58.526 ************************************ 00:05:58.526 END TEST rpc 00:05:58.526 ************************************ 00:05:58.526 08:26:33 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.526 08:26:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.526 08:26:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:58.526 08:26:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.526 08:26:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.526 08:26:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.526 ************************************ 00:05:58.526 START TEST skip_rpc 00:05:58.526 ************************************ 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:58.526 * Looking for test storage... 
00:05:58.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.526 08:26:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.526 --rc genhtml_branch_coverage=1 00:05:58.526 --rc genhtml_function_coverage=1 00:05:58.526 --rc genhtml_legend=1 00:05:58.526 --rc geninfo_all_blocks=1 00:05:58.526 --rc geninfo_unexecuted_blocks=1 00:05:58.526 00:05:58.526 ' 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.526 --rc genhtml_branch_coverage=1 00:05:58.526 --rc genhtml_function_coverage=1 00:05:58.526 --rc genhtml_legend=1 00:05:58.526 --rc geninfo_all_blocks=1 00:05:58.526 --rc geninfo_unexecuted_blocks=1 00:05:58.526 00:05:58.526 ' 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:58.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.526 --rc genhtml_branch_coverage=1 00:05:58.526 --rc genhtml_function_coverage=1 00:05:58.526 --rc genhtml_legend=1 00:05:58.526 --rc geninfo_all_blocks=1 00:05:58.526 --rc geninfo_unexecuted_blocks=1 00:05:58.526 00:05:58.526 ' 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.526 --rc genhtml_branch_coverage=1 00:05:58.526 --rc genhtml_function_coverage=1 00:05:58.526 --rc genhtml_legend=1 00:05:58.526 --rc geninfo_all_blocks=1 00:05:58.526 --rc geninfo_unexecuted_blocks=1 00:05:58.526 00:05:58.526 ' 00:05:58.526 08:26:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.526 08:26:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:58.526 08:26:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.526 08:26:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.526 ************************************ 00:05:58.526 START TEST skip_rpc 00:05:58.526 ************************************ 00:05:58.526 08:26:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:58.526 08:26:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57985 00:05:58.526 08:26:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:58.526 08:26:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.526 08:26:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:58.786 [2024-11-22 08:26:33.615700] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:05:58.786 [2024-11-22 08:26:33.615829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57985 ] 00:05:58.786 [2024-11-22 08:26:33.795324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.045 [2024-11-22 08:26:33.911222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57985 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57985 ']' 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57985 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57985 00:06:04.349 killing process with pid 57985 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57985' 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57985 00:06:04.349 08:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57985 00:06:06.257 00:06:06.257 real 0m7.448s 00:06:06.257 user 0m6.962s 00:06:06.257 sys 0m0.411s 00:06:06.257 08:26:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.257 ************************************ 00:06:06.257 END TEST skip_rpc 00:06:06.257 ************************************ 00:06:06.257 08:26:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:06.257 08:26:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:06.257 08:26:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.257 08:26:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.257 08:26:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.257 ************************************ 00:06:06.257 START TEST skip_rpc_with_json 00:06:06.257 ************************************ 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58095 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58095 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58095 ']' 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.257 08:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.257 [2024-11-22 08:26:41.142880] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:06:06.257 [2024-11-22 08:26:41.143030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58095 ] 00:06:06.257 [2024-11-22 08:26:41.324773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.517 [2024-11-22 08:26:41.439411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.456 [2024-11-22 08:26:42.271941] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:07.456 request: 00:06:07.456 { 00:06:07.456 "trtype": "tcp", 00:06:07.456 "method": "nvmf_get_transports", 00:06:07.456 "req_id": 1 00:06:07.456 } 00:06:07.456 Got JSON-RPC error response 00:06:07.456 response: 00:06:07.456 { 00:06:07.456 "code": -19, 00:06:07.456 "message": "No such device" 00:06:07.456 } 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.456 [2024-11-22 08:26:42.288045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.456 08:26:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:07.456 { 00:06:07.456 "subsystems": [ 00:06:07.456 { 00:06:07.456 "subsystem": "fsdev", 00:06:07.456 "config": [ 00:06:07.456 { 00:06:07.456 "method": "fsdev_set_opts", 00:06:07.456 "params": { 00:06:07.456 "fsdev_io_pool_size": 65535, 00:06:07.456 "fsdev_io_cache_size": 256 00:06:07.456 } 00:06:07.456 } 00:06:07.456 ] 00:06:07.456 }, 00:06:07.456 { 00:06:07.456 "subsystem": "keyring", 00:06:07.456 "config": [] 00:06:07.456 }, 00:06:07.456 { 00:06:07.456 "subsystem": "iobuf", 00:06:07.456 "config": [ 00:06:07.456 { 00:06:07.456 "method": "iobuf_set_options", 00:06:07.456 "params": { 00:06:07.456 "small_pool_count": 8192, 00:06:07.456 "large_pool_count": 1024, 00:06:07.456 "small_bufsize": 8192, 00:06:07.456 "large_bufsize": 135168, 00:06:07.456 "enable_numa": false 00:06:07.456 } 00:06:07.456 } 00:06:07.456 ] 00:06:07.456 }, 00:06:07.456 { 00:06:07.456 "subsystem": "sock", 00:06:07.456 "config": [ 00:06:07.456 { 
00:06:07.456 "method": "sock_set_default_impl", 00:06:07.456 "params": { 00:06:07.456 "impl_name": "posix" 00:06:07.456 } 00:06:07.456 }, 00:06:07.456 { 00:06:07.456 "method": "sock_impl_set_options", 00:06:07.456 "params": { 00:06:07.456 "impl_name": "ssl", 00:06:07.456 "recv_buf_size": 4096, 00:06:07.456 "send_buf_size": 4096, 00:06:07.456 "enable_recv_pipe": true, 00:06:07.456 "enable_quickack": false, 00:06:07.456 "enable_placement_id": 0, 00:06:07.456 "enable_zerocopy_send_server": true, 00:06:07.456 "enable_zerocopy_send_client": false, 00:06:07.456 "zerocopy_threshold": 0, 00:06:07.456 "tls_version": 0, 00:06:07.456 "enable_ktls": false 00:06:07.456 } 00:06:07.456 }, 00:06:07.456 { 00:06:07.456 "method": "sock_impl_set_options", 00:06:07.456 "params": { 00:06:07.456 "impl_name": "posix", 00:06:07.456 "recv_buf_size": 2097152, 00:06:07.456 "send_buf_size": 2097152, 00:06:07.456 "enable_recv_pipe": true, 00:06:07.456 "enable_quickack": false, 00:06:07.456 "enable_placement_id": 0, 00:06:07.456 "enable_zerocopy_send_server": true, 00:06:07.456 "enable_zerocopy_send_client": false, 00:06:07.456 "zerocopy_threshold": 0, 00:06:07.456 "tls_version": 0, 00:06:07.456 "enable_ktls": false 00:06:07.456 } 00:06:07.456 } 00:06:07.456 ] 00:06:07.456 }, 00:06:07.456 { 00:06:07.456 "subsystem": "vmd", 00:06:07.456 "config": [] 00:06:07.456 }, 00:06:07.456 { 00:06:07.456 "subsystem": "accel", 00:06:07.456 "config": [ 00:06:07.456 { 00:06:07.456 "method": "accel_set_options", 00:06:07.456 "params": { 00:06:07.456 "small_cache_size": 128, 00:06:07.456 "large_cache_size": 16, 00:06:07.456 "task_count": 2048, 00:06:07.456 "sequence_count": 2048, 00:06:07.456 "buf_count": 2048 00:06:07.456 } 00:06:07.456 } 00:06:07.456 ] 00:06:07.456 }, 00:06:07.457 { 00:06:07.457 "subsystem": "bdev", 00:06:07.457 "config": [ 00:06:07.457 { 00:06:07.457 "method": "bdev_set_options", 00:06:07.457 "params": { 00:06:07.457 "bdev_io_pool_size": 65535, 00:06:07.457 "bdev_io_cache_size": 256, 00:06:07.457 "bdev_auto_examine": true, 00:06:07.457 "iobuf_small_cache_size": 128, 00:06:07.457 "iobuf_large_cache_size": 16 00:06:07.457 } 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "method": "bdev_raid_set_options", 00:06:07.457 "params": { 00:06:07.457 "process_window_size_kb": 1024, 00:06:07.457 "process_max_bandwidth_mb_sec": 0 00:06:07.457 } 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "method": "bdev_iscsi_set_options", 00:06:07.457 "params": { 00:06:07.457 "timeout_sec": 30 00:06:07.457 } 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "method": "bdev_nvme_set_options", 00:06:07.457 "params": { 00:06:07.457 "action_on_timeout": "none", 00:06:07.457 "timeout_us": 0, 00:06:07.457 "timeout_admin_us": 0, 00:06:07.457 "keep_alive_timeout_ms": 10000, 00:06:07.457 "arbitration_burst": 0, 00:06:07.457 "low_priority_weight": 0, 00:06:07.457 "medium_priority_weight": 0, 00:06:07.457 "high_priority_weight": 0, 00:06:07.457 "nvme_adminq_poll_period_us": 10000, 00:06:07.457 "nvme_ioq_poll_period_us": 0, 00:06:07.457 "io_queue_requests": 0, 00:06:07.457 "delay_cmd_submit": true, 00:06:07.457 "transport_retry_count": 4, 00:06:07.457 "bdev_retry_count": 3, 00:06:07.457 "transport_ack_timeout": 0, 00:06:07.457 "ctrlr_loss_timeout_sec": 0, 00:06:07.457 "reconnect_delay_sec": 0, 00:06:07.457 "fast_io_fail_timeout_sec": 0, 00:06:07.457 "disable_auto_failback": false, 00:06:07.457 "generate_uuids": false, 00:06:07.457 "transport_tos": 0, 00:06:07.457 "nvme_error_stat": false, 00:06:07.457 "rdma_srq_size": 0, 00:06:07.457 "io_path_stat": false, 
00:06:07.457 "allow_accel_sequence": false, 00:06:07.457 "rdma_max_cq_size": 0, 00:06:07.457 "rdma_cm_event_timeout_ms": 0, 00:06:07.457 "dhchap_digests": [ 00:06:07.457 "sha256", 00:06:07.457 "sha384", 00:06:07.457 "sha512" 00:06:07.457 ], 00:06:07.457 "dhchap_dhgroups": [ 00:06:07.457 "null", 00:06:07.457 "ffdhe2048", 00:06:07.457 "ffdhe3072", 00:06:07.457 "ffdhe4096", 00:06:07.457 "ffdhe6144", 00:06:07.457 "ffdhe8192" 00:06:07.457 ] 00:06:07.457 } 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "method": "bdev_nvme_set_hotplug", 00:06:07.457 "params": { 00:06:07.457 "period_us": 100000, 00:06:07.457 "enable": false 00:06:07.457 } 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "method": "bdev_wait_for_examine" 00:06:07.457 } 00:06:07.457 ] 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "subsystem": "scsi", 00:06:07.457 "config": null 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "subsystem": "scheduler", 00:06:07.457 "config": [ 00:06:07.457 { 00:06:07.457 "method": "framework_set_scheduler", 00:06:07.457 "params": { 00:06:07.457 "name": "static" 00:06:07.457 } 00:06:07.457 } 00:06:07.457 ] 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "subsystem": "vhost_scsi", 00:06:07.457 "config": [] 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "subsystem": "vhost_blk", 00:06:07.457 "config": [] 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "subsystem": "ublk", 00:06:07.457 "config": [] 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "subsystem": "nbd", 00:06:07.457 "config": [] 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "subsystem": "nvmf", 00:06:07.457 "config": [ 00:06:07.457 { 00:06:07.457 "method": "nvmf_set_config", 00:06:07.457 "params": { 00:06:07.457 "discovery_filter": "match_any", 00:06:07.457 "admin_cmd_passthru": { 00:06:07.457 "identify_ctrlr": false 00:06:07.457 }, 00:06:07.457 "dhchap_digests": [ 00:06:07.457 "sha256", 00:06:07.457 "sha384", 00:06:07.457 "sha512" 00:06:07.457 ], 00:06:07.457 "dhchap_dhgroups": [ 00:06:07.457 "null", 00:06:07.457 "ffdhe2048", 00:06:07.457 "ffdhe3072", 00:06:07.457 "ffdhe4096", 00:06:07.457 "ffdhe6144", 00:06:07.457 "ffdhe8192" 00:06:07.457 ] 00:06:07.457 } 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "method": "nvmf_set_max_subsystems", 00:06:07.457 "params": { 00:06:07.457 "max_subsystems": 1024 00:06:07.457 } 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "method": "nvmf_set_crdt", 00:06:07.457 "params": { 00:06:07.457 "crdt1": 0, 00:06:07.457 "crdt2": 0, 00:06:07.457 "crdt3": 0 00:06:07.457 } 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "method": "nvmf_create_transport", 00:06:07.457 "params": { 00:06:07.457 "trtype": "TCP", 00:06:07.457 "max_queue_depth": 128, 00:06:07.457 "max_io_qpairs_per_ctrlr": 127, 00:06:07.457 "in_capsule_data_size": 4096, 00:06:07.457 "max_io_size": 131072, 00:06:07.457 "io_unit_size": 131072, 00:06:07.457 "max_aq_depth": 128, 00:06:07.457 "num_shared_buffers": 511, 00:06:07.457 "buf_cache_size": 4294967295, 00:06:07.457 "dif_insert_or_strip": false, 00:06:07.457 "zcopy": false, 00:06:07.457 "c2h_success": true, 00:06:07.457 "sock_priority": 0, 00:06:07.457 "abort_timeout_sec": 1, 00:06:07.457 "ack_timeout": 0, 00:06:07.457 "data_wr_pool_size": 0 00:06:07.457 } 00:06:07.457 } 00:06:07.457 ] 00:06:07.457 }, 00:06:07.457 { 00:06:07.457 "subsystem": "iscsi", 00:06:07.457 "config": [ 00:06:07.457 { 00:06:07.457 "method": "iscsi_set_options", 00:06:07.457 "params": { 00:06:07.457 "node_base": "iqn.2016-06.io.spdk", 00:06:07.457 "max_sessions": 128, 00:06:07.457 "max_connections_per_session": 2, 00:06:07.457 "max_queue_depth": 64, 00:06:07.457 
"default_time2wait": 2, 00:06:07.457 "default_time2retain": 20, 00:06:07.457 "first_burst_length": 8192, 00:06:07.457 "immediate_data": true, 00:06:07.457 "allow_duplicated_isid": false, 00:06:07.457 "error_recovery_level": 0, 00:06:07.457 "nop_timeout": 60, 00:06:07.457 "nop_in_interval": 30, 00:06:07.457 "disable_chap": false, 00:06:07.457 "require_chap": false, 00:06:07.457 "mutual_chap": false, 00:06:07.457 "chap_group": 0, 00:06:07.457 "max_large_datain_per_connection": 64, 00:06:07.457 "max_r2t_per_connection": 4, 00:06:07.457 "pdu_pool_size": 36864, 00:06:07.457 "immediate_data_pool_size": 16384, 00:06:07.457 "data_out_pool_size": 2048 00:06:07.457 } 00:06:07.457 } 00:06:07.457 ] 00:06:07.457 } 00:06:07.457 ] 00:06:07.457 } 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58095 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58095 ']' 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58095 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58095 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.457 killing process with pid 58095 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58095' 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58095 00:06:07.457 08:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58095 00:06:09.996 08:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58140 00:06:09.996 08:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.996 08:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58140 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58140 ']' 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58140 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58140 00:06:15.275 killing process with pid 58140 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58140' 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58140 00:06:15.275 08:26:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58140 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:17.814 00:06:17.814 real 0m11.308s 00:06:17.814 user 0m10.707s 00:06:17.814 sys 0m0.920s 00:06:17.814 ************************************ 00:06:17.814 END TEST skip_rpc_with_json 00:06:17.814 ************************************ 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.814 08:26:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:17.814 08:26:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.814 08:26:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.814 08:26:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.814 ************************************ 00:06:17.814 START TEST skip_rpc_with_delay 00:06:17.814 ************************************ 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:17.814 [2024-11-22 08:26:52.532090] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.814 00:06:17.814 real 0m0.184s 00:06:17.814 user 0m0.084s 00:06:17.814 sys 0m0.098s 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.814 ************************************ 00:06:17.814 END TEST skip_rpc_with_delay 00:06:17.814 ************************************ 00:06:17.814 08:26:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:17.814 08:26:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:17.814 08:26:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:17.814 08:26:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:17.814 08:26:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.814 08:26:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.814 08:26:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.815 ************************************ 00:06:17.815 START TEST exit_on_failed_rpc_init 00:06:17.815 ************************************ 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58279 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58279 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58279 ']' 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.815 08:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:17.815 [2024-11-22 08:26:52.788941] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:06:17.815 [2024-11-22 08:26:52.789088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58279 ] 00:06:18.073 [2024-11-22 08:26:52.971586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.074 [2024-11-22 08:26:53.085341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:19.047 08:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.047 [2024-11-22 08:26:54.033891] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:06:19.047 [2024-11-22 08:26:54.034222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58297 ] 00:06:19.306 [2024-11-22 08:26:54.203611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.306 [2024-11-22 08:26:54.321068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.306 [2024-11-22 08:26:54.321350] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:19.306 [2024-11-22 08:26:54.321375] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.306 [2024-11-22 08:26:54.321396] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58279 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58279 ']' 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58279 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58279 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58279' 00:06:19.564 killing process with pid 58279 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58279 00:06:19.564 08:26:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58279 00:06:22.102 00:06:22.102 real 0m4.303s 00:06:22.102 user 0m4.594s 00:06:22.102 sys 0m0.613s 00:06:22.102 ************************************ 00:06:22.102 END TEST exit_on_failed_rpc_init 00:06:22.102 ************************************ 00:06:22.102 08:26:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.102 08:26:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.102 08:26:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:22.102 ************************************ 00:06:22.102 END TEST skip_rpc 00:06:22.102 ************************************ 00:06:22.102 00:06:22.102 real 0m23.772s 00:06:22.103 user 0m22.546s 00:06:22.103 sys 0m2.370s 00:06:22.103 08:26:57 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.103 08:26:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.103 08:26:57 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:22.103 08:26:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.103 08:26:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.103 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:06:22.103 
************************************ 00:06:22.103 START TEST rpc_client 00:06:22.103 ************************************ 00:06:22.103 08:26:57 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:22.362 * Looking for test storage... 00:06:22.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.362 08:26:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.362 --rc genhtml_branch_coverage=1 00:06:22.362 --rc genhtml_function_coverage=1 00:06:22.362 --rc genhtml_legend=1 00:06:22.362 --rc geninfo_all_blocks=1 00:06:22.362 --rc geninfo_unexecuted_blocks=1 00:06:22.362 00:06:22.362 ' 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.362 --rc genhtml_branch_coverage=1 00:06:22.362 --rc genhtml_function_coverage=1 00:06:22.362 --rc genhtml_legend=1 00:06:22.362 --rc geninfo_all_blocks=1 00:06:22.362 --rc geninfo_unexecuted_blocks=1 00:06:22.362 00:06:22.362 ' 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.362 --rc genhtml_branch_coverage=1 00:06:22.362 --rc genhtml_function_coverage=1 00:06:22.362 --rc genhtml_legend=1 00:06:22.362 --rc geninfo_all_blocks=1 00:06:22.362 --rc geninfo_unexecuted_blocks=1 00:06:22.362 00:06:22.362 ' 00:06:22.362 08:26:57 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.362 --rc genhtml_branch_coverage=1 00:06:22.362 --rc genhtml_function_coverage=1 00:06:22.362 --rc genhtml_legend=1 00:06:22.362 --rc geninfo_all_blocks=1 00:06:22.362 --rc geninfo_unexecuted_blocks=1 00:06:22.362 00:06:22.362 ' 00:06:22.362 08:26:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:22.362 OK 00:06:22.362 08:26:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:22.362 00:06:22.362 real 0m0.316s 00:06:22.362 user 0m0.182s 00:06:22.362 sys 0m0.147s 00:06:22.363 08:26:57 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.363 08:26:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:22.363 ************************************ 00:06:22.622 END TEST rpc_client 00:06:22.622 ************************************ 00:06:22.622 08:26:57 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:22.622 08:26:57 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.622 08:26:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.622 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:06:22.622 ************************************ 00:06:22.622 START TEST json_config 00:06:22.622 ************************************ 00:06:22.622 08:26:57 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:22.622 08:26:57 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.622 08:26:57 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.622 08:26:57 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.622 08:26:57 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.622 08:26:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.622 08:26:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.622 08:26:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.622 08:26:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.622 08:26:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.622 08:26:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.622 08:26:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.622 08:26:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.622 08:26:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.622 08:26:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.622 08:26:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.622 08:26:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:22.622 08:26:57 json_config -- scripts/common.sh@345 -- # : 1 00:06:22.622 08:26:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.622 08:26:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.622 08:26:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:22.622 08:26:57 json_config -- scripts/common.sh@353 -- # local d=1 00:06:22.622 08:26:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.622 08:26:57 json_config -- scripts/common.sh@355 -- # echo 1 00:06:22.622 08:26:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.622 08:26:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:22.622 08:26:57 json_config -- scripts/common.sh@353 -- # local d=2 00:06:22.622 08:26:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.622 08:26:57 json_config -- scripts/common.sh@355 -- # echo 2 00:06:22.622 08:26:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.622 08:26:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.622 08:26:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.622 08:26:57 json_config -- scripts/common.sh@368 -- # return 0 00:06:22.622 08:26:57 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.622 08:26:57 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.622 --rc genhtml_branch_coverage=1 00:06:22.622 --rc genhtml_function_coverage=1 00:06:22.622 --rc genhtml_legend=1 00:06:22.622 --rc geninfo_all_blocks=1 00:06:22.622 --rc geninfo_unexecuted_blocks=1 00:06:22.622 00:06:22.622 ' 00:06:22.622 08:26:57 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.622 --rc genhtml_branch_coverage=1 00:06:22.622 --rc genhtml_function_coverage=1 00:06:22.623 --rc genhtml_legend=1 00:06:22.623 --rc geninfo_all_blocks=1 00:06:22.623 --rc geninfo_unexecuted_blocks=1 00:06:22.623 00:06:22.623 ' 00:06:22.623 08:26:57 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.623 --rc genhtml_branch_coverage=1 00:06:22.623 --rc genhtml_function_coverage=1 00:06:22.623 --rc genhtml_legend=1 00:06:22.623 --rc geninfo_all_blocks=1 00:06:22.623 --rc geninfo_unexecuted_blocks=1 00:06:22.623 00:06:22.623 ' 00:06:22.623 08:26:57 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.623 --rc genhtml_branch_coverage=1 00:06:22.623 --rc genhtml_function_coverage=1 00:06:22.623 --rc genhtml_legend=1 00:06:22.623 --rc geninfo_all_blocks=1 00:06:22.623 --rc geninfo_unexecuted_blocks=1 00:06:22.623 00:06:22.623 ' 00:06:22.623 08:26:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.883 08:26:57 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4694c2a8-1ece-45b9-bcc1-53b11818720f 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4694c2a8-1ece-45b9-bcc1-53b11818720f 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.883 08:26:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.883 08:26:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.883 08:26:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.883 08:26:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.883 08:26:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.883 08:26:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.883 08:26:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.883 08:26:57 json_config -- paths/export.sh@5 -- # export PATH 00:06:22.883 08:26:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@51 -- # : 0 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.883 08:26:57 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:22.883 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.883 08:26:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.883 08:26:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:22.883 WARNING: No tests are enabled so not running JSON configuration tests 00:06:22.883 08:26:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:22.883 08:26:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:22.883 08:26:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:22.883 08:26:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:22.883 08:26:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:22.883 08:26:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:22.883 00:06:22.883 real 0m0.236s 00:06:22.883 user 0m0.144s 00:06:22.883 sys 0m0.093s 00:06:22.883 08:26:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.883 08:26:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 ************************************ 00:06:22.883 END TEST json_config 00:06:22.883 ************************************ 00:06:22.883 08:26:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:22.883 08:26:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.883 08:26:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.883 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 ************************************ 00:06:22.883 START TEST json_config_extra_key 00:06:22.883 ************************************ 00:06:22.883 08:26:57 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:22.883 08:26:57 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.883 08:26:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.883 08:26:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.143 08:26:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.143 08:26:57 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.143 08:26:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:23.143 08:26:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:23.143 08:26:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.143 08:26:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:23.143 08:26:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.143 08:26:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.143 08:26:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.144 08:26:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.144 --rc genhtml_branch_coverage=1 00:06:23.144 --rc genhtml_function_coverage=1 00:06:23.144 --rc genhtml_legend=1 00:06:23.144 --rc geninfo_all_blocks=1 00:06:23.144 --rc geninfo_unexecuted_blocks=1 00:06:23.144 00:06:23.144 ' 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.144 --rc genhtml_branch_coverage=1 00:06:23.144 --rc genhtml_function_coverage=1 00:06:23.144 --rc genhtml_legend=1 00:06:23.144 --rc geninfo_all_blocks=1 00:06:23.144 --rc geninfo_unexecuted_blocks=1 00:06:23.144 00:06:23.144 ' 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.144 --rc genhtml_branch_coverage=1 00:06:23.144 --rc genhtml_function_coverage=1 00:06:23.144 --rc genhtml_legend=1 00:06:23.144 --rc geninfo_all_blocks=1 00:06:23.144 --rc geninfo_unexecuted_blocks=1 00:06:23.144 00:06:23.144 ' 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.144 --rc genhtml_branch_coverage=1 00:06:23.144 --rc 
genhtml_function_coverage=1 00:06:23.144 --rc genhtml_legend=1 00:06:23.144 --rc geninfo_all_blocks=1 00:06:23.144 --rc geninfo_unexecuted_blocks=1 00:06:23.144 00:06:23.144 ' 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4694c2a8-1ece-45b9-bcc1-53b11818720f 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4694c2a8-1ece-45b9-bcc1-53b11818720f 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.144 08:26:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.144 08:26:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.144 08:26:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.144 08:26:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.144 08:26:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.144 08:26:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.144 08:26:58 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.144 08:26:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:23.144 08:26:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.144 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.144 08:26:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:23.144 INFO: launching applications... 
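[Editor's note] The recurring "[: : integer expression expected" from nvmf/common.sh line 33 is a shell quirk rather than a test failure: the traced test is '[' '' -eq 1 ']', and the POSIX test builtin requires both operands of -eq to be integers. The test merely returns non-zero and the script continues. A minimal sketch of the failure and one defensive rewrite (the variable name here is illustrative, not the one in common.sh):

    flag=""                                   # unset/empty in this environment
    [ "$flag" -eq 1 ] && echo hugepages       # stderr: "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo hugepages  # defaulting to 0 keeps the test numeric and quiet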
00:06:23.144 08:26:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58507 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.144 Waiting for target to run... 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:23.144 08:26:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58507 /var/tmp/spdk_tgt.sock 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58507 ']' 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.144 08:26:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.145 08:26:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.145 [2024-11-22 08:26:58.164904] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:06:23.145 [2024-11-22 08:26:58.165051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58507 ] 00:06:23.714 [2024-11-22 08:26:58.562015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.714 [2024-11-22 08:26:58.669956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.653 00:06:24.653 INFO: shutting down applications... 00:06:24.653 08:26:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.653 08:26:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:24.653 08:26:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
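[Editor's note] The launch above follows a common pattern: start spdk_tgt in the background with a JSON config and an explicit RPC socket, then poll until the socket is up before driving it. A minimal sketch of that wait loop, with flags as traced; the real waitforlisten helper also confirms the server answers RPC, whereas this sketch only waits for the socket node to appear, and the 0.1 s interval is illustrative:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    pid=$!
    for ((i = 0; i < 100; i++)); do            # max_retries=100, as in the trace
        [ -S /var/tmp/spdk_tgt.sock ] && break # -S: the socket node exists
        sleep 0.1
    done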
00:06:24.653 08:26:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58507 ]] 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58507 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58507 00:06:24.653 08:26:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.912 08:26:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.912 08:26:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.912 08:26:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58507 00:06:24.912 08:26:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.481 08:27:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.481 08:27:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.481 08:27:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58507 00:06:25.481 08:27:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.050 08:27:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.050 08:27:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.050 08:27:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58507 00:06:26.050 08:27:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.618 08:27:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.618 08:27:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.618 08:27:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58507 00:06:26.618 08:27:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.877 08:27:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.878 08:27:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.878 08:27:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58507 00:06:26.878 08:27:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.447 08:27:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.447 08:27:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.447 08:27:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58507 00:06:27.447 08:27:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.447 SPDK target shutdown done 00:06:27.447 Success 00:06:27.447 08:27:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:27.447 08:27:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.447 08:27:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.447 08:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:27.447 00:06:27.447 real 0m4.623s 00:06:27.447 user 0m4.060s 00:06:27.447 sys 0m0.612s 00:06:27.447 
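[Editor's note] The half-second polling above is the generic graceful-shutdown idiom used by json_config_test_shutdown_app: send SIGINT once, then probe with kill -0 (which delivers no signal, only checks liveness) until the process exits or the retry budget of 30 tries runs out. A minimal sketch:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # kill -0 only probes whether the pid is alive
        sleep 0.5
    done
    echo 'SPDK target shutdown done'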
08:27:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.447 08:27:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.447 ************************************ 00:06:27.447 END TEST json_config_extra_key 00:06:27.447 ************************************ 00:06:27.447 08:27:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.447 08:27:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.447 08:27:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.447 08:27:02 -- common/autotest_common.sh@10 -- # set +x 00:06:27.447 ************************************ 00:06:27.447 START TEST alias_rpc 00:06:27.447 ************************************ 00:06:27.447 08:27:02 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.707 * Looking for test storage... 00:06:27.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:27.707 08:27:02 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.707 08:27:02 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.707 08:27:02 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.707 08:27:02 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.707 08:27:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.707 08:27:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.707 08:27:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.707 08:27:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.707 08:27:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.707 08:27:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.708 08:27:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.708 --rc genhtml_branch_coverage=1 00:06:27.708 --rc genhtml_function_coverage=1 00:06:27.708 --rc genhtml_legend=1 00:06:27.708 --rc geninfo_all_blocks=1 00:06:27.708 --rc geninfo_unexecuted_blocks=1 00:06:27.708 00:06:27.708 ' 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.708 --rc genhtml_branch_coverage=1 00:06:27.708 --rc genhtml_function_coverage=1 00:06:27.708 --rc genhtml_legend=1 00:06:27.708 --rc geninfo_all_blocks=1 00:06:27.708 --rc geninfo_unexecuted_blocks=1 00:06:27.708 00:06:27.708 ' 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.708 --rc genhtml_branch_coverage=1 00:06:27.708 --rc genhtml_function_coverage=1 00:06:27.708 --rc genhtml_legend=1 00:06:27.708 --rc geninfo_all_blocks=1 00:06:27.708 --rc geninfo_unexecuted_blocks=1 00:06:27.708 00:06:27.708 ' 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.708 --rc genhtml_branch_coverage=1 00:06:27.708 --rc genhtml_function_coverage=1 00:06:27.708 --rc genhtml_legend=1 00:06:27.708 --rc geninfo_all_blocks=1 00:06:27.708 --rc geninfo_unexecuted_blocks=1 00:06:27.708 00:06:27.708 ' 00:06:27.708 08:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.708 08:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58619 00:06:27.708 08:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.708 08:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58619 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58619 ']' 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
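[Editor's note] The long cmp_versions trace around this point (splitting on IFS=.-: and comparing fields) is a field-by-field version test used to gate the lcov coverage flags on the installed lcov being older than 2. A minimal sketch of the same idea, assuming plain dot-separated version strings:

    version_lt() {                      # returns 0 when $1 < $2
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"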
00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.708 08:27:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 [2024-11-22 08:27:02.860521] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:06:27.968 [2024-11-22 08:27:02.860649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58619 ] 00:06:27.968 [2024-11-22 08:27:03.043937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.227 [2024-11-22 08:27:03.158431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.166 08:27:04 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.166 08:27:04 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.166 08:27:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:29.166 08:27:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58619 00:06:29.166 08:27:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58619 ']' 00:06:29.166 08:27:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58619 00:06:29.166 08:27:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:29.426 08:27:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.426 08:27:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58619 00:06:29.426 08:27:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.426 08:27:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.426 killing process with pid 58619 00:06:29.426 08:27:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58619' 00:06:29.426 08:27:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 58619 00:06:29.426 08:27:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 58619 00:06:31.962 ************************************ 00:06:31.962 END TEST alias_rpc 00:06:31.962 ************************************ 00:06:31.962 00:06:31.962 real 0m4.147s 00:06:31.962 user 0m4.114s 00:06:31.962 sys 0m0.615s 00:06:31.962 08:27:06 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.962 08:27:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.962 08:27:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:31.962 08:27:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:31.962 08:27:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.962 08:27:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.962 08:27:06 -- common/autotest_common.sh@10 -- # set +x 00:06:31.962 ************************************ 00:06:31.962 START TEST spdkcli_tcp 00:06:31.962 ************************************ 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:31.962 * Looking for test storage... 
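[Editor's note] killprocess, traced above for pid 58619, does more than kill: it first confirms the pid is alive, resolves its command name with ps to verify it is an SPDK reactor (reactor_0 in the trace) and not a wrapper like sudo, and only then signals and waits. A minimal sketch of that shape; the structure follows the trace, the details are illustrative:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1               # nothing to do if it is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1           # refuse to kill the privilege wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                  # reap it if it is our child
    }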
00:06:31.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.962 08:27:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.962 --rc genhtml_branch_coverage=1 00:06:31.962 --rc genhtml_function_coverage=1 00:06:31.962 --rc genhtml_legend=1 00:06:31.962 --rc geninfo_all_blocks=1 00:06:31.962 --rc geninfo_unexecuted_blocks=1 00:06:31.962 00:06:31.962 ' 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.962 --rc genhtml_branch_coverage=1 00:06:31.962 --rc genhtml_function_coverage=1 00:06:31.962 --rc genhtml_legend=1 00:06:31.962 --rc geninfo_all_blocks=1 00:06:31.962 --rc geninfo_unexecuted_blocks=1 00:06:31.962 
00:06:31.962 ' 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.962 --rc genhtml_branch_coverage=1 00:06:31.962 --rc genhtml_function_coverage=1 00:06:31.962 --rc genhtml_legend=1 00:06:31.962 --rc geninfo_all_blocks=1 00:06:31.962 --rc geninfo_unexecuted_blocks=1 00:06:31.962 00:06:31.962 ' 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.962 --rc genhtml_branch_coverage=1 00:06:31.962 --rc genhtml_function_coverage=1 00:06:31.962 --rc genhtml_legend=1 00:06:31.962 --rc geninfo_all_blocks=1 00:06:31.962 --rc geninfo_unexecuted_blocks=1 00:06:31.962 00:06:31.962 ' 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58726 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:31.962 08:27:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58726 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58726 ']' 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.962 08:27:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.221 [2024-11-22 08:27:07.078904] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:06:32.221 [2024-11-22 08:27:07.079053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58726 ] 00:06:32.221 [2024-11-22 08:27:07.263446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.480 [2024-11-22 08:27:07.379728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.480 [2024-11-22 08:27:07.379777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.417 08:27:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.417 08:27:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:33.417 08:27:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58748 00:06:33.417 08:27:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:33.417 08:27:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:33.677 [ 00:06:33.677 "bdev_malloc_delete", 00:06:33.677 "bdev_malloc_create", 00:06:33.677 "bdev_null_resize", 00:06:33.677 "bdev_null_delete", 00:06:33.677 "bdev_null_create", 00:06:33.677 "bdev_nvme_cuse_unregister", 00:06:33.677 "bdev_nvme_cuse_register", 00:06:33.677 "bdev_opal_new_user", 00:06:33.677 "bdev_opal_set_lock_state", 00:06:33.677 "bdev_opal_delete", 00:06:33.677 "bdev_opal_get_info", 00:06:33.677 "bdev_opal_create", 00:06:33.677 "bdev_nvme_opal_revert", 00:06:33.677 "bdev_nvme_opal_init", 00:06:33.677 "bdev_nvme_send_cmd", 00:06:33.677 "bdev_nvme_set_keys", 00:06:33.677 "bdev_nvme_get_path_iostat", 00:06:33.677 "bdev_nvme_get_mdns_discovery_info", 00:06:33.677 "bdev_nvme_stop_mdns_discovery", 00:06:33.677 "bdev_nvme_start_mdns_discovery", 00:06:33.677 "bdev_nvme_set_multipath_policy", 00:06:33.677 "bdev_nvme_set_preferred_path", 00:06:33.677 "bdev_nvme_get_io_paths", 00:06:33.677 "bdev_nvme_remove_error_injection", 00:06:33.677 "bdev_nvme_add_error_injection", 00:06:33.677 "bdev_nvme_get_discovery_info", 00:06:33.677 "bdev_nvme_stop_discovery", 00:06:33.677 "bdev_nvme_start_discovery", 00:06:33.677 "bdev_nvme_get_controller_health_info", 00:06:33.677 "bdev_nvme_disable_controller", 00:06:33.677 "bdev_nvme_enable_controller", 00:06:33.677 "bdev_nvme_reset_controller", 00:06:33.677 "bdev_nvme_get_transport_statistics", 00:06:33.677 "bdev_nvme_apply_firmware", 00:06:33.677 "bdev_nvme_detach_controller", 00:06:33.677 "bdev_nvme_get_controllers", 00:06:33.677 "bdev_nvme_attach_controller", 00:06:33.677 "bdev_nvme_set_hotplug", 00:06:33.677 "bdev_nvme_set_options", 00:06:33.677 "bdev_passthru_delete", 00:06:33.677 "bdev_passthru_create", 00:06:33.677 "bdev_lvol_set_parent_bdev", 00:06:33.677 "bdev_lvol_set_parent", 00:06:33.677 "bdev_lvol_check_shallow_copy", 00:06:33.677 "bdev_lvol_start_shallow_copy", 00:06:33.677 "bdev_lvol_grow_lvstore", 00:06:33.677 "bdev_lvol_get_lvols", 00:06:33.677 "bdev_lvol_get_lvstores", 00:06:33.677 "bdev_lvol_delete", 00:06:33.677 "bdev_lvol_set_read_only", 00:06:33.677 "bdev_lvol_resize", 00:06:33.677 "bdev_lvol_decouple_parent", 00:06:33.677 "bdev_lvol_inflate", 00:06:33.677 "bdev_lvol_rename", 00:06:33.677 "bdev_lvol_clone_bdev", 00:06:33.677 "bdev_lvol_clone", 00:06:33.677 "bdev_lvol_snapshot", 00:06:33.677 "bdev_lvol_create", 00:06:33.677 "bdev_lvol_delete_lvstore", 00:06:33.677 "bdev_lvol_rename_lvstore", 00:06:33.677 
"bdev_lvol_create_lvstore", 00:06:33.677 "bdev_raid_set_options", 00:06:33.677 "bdev_raid_remove_base_bdev", 00:06:33.677 "bdev_raid_add_base_bdev", 00:06:33.677 "bdev_raid_delete", 00:06:33.677 "bdev_raid_create", 00:06:33.677 "bdev_raid_get_bdevs", 00:06:33.677 "bdev_error_inject_error", 00:06:33.677 "bdev_error_delete", 00:06:33.677 "bdev_error_create", 00:06:33.677 "bdev_split_delete", 00:06:33.677 "bdev_split_create", 00:06:33.677 "bdev_delay_delete", 00:06:33.677 "bdev_delay_create", 00:06:33.677 "bdev_delay_update_latency", 00:06:33.677 "bdev_zone_block_delete", 00:06:33.677 "bdev_zone_block_create", 00:06:33.677 "blobfs_create", 00:06:33.677 "blobfs_detect", 00:06:33.677 "blobfs_set_cache_size", 00:06:33.677 "bdev_xnvme_delete", 00:06:33.677 "bdev_xnvme_create", 00:06:33.677 "bdev_aio_delete", 00:06:33.677 "bdev_aio_rescan", 00:06:33.677 "bdev_aio_create", 00:06:33.677 "bdev_ftl_set_property", 00:06:33.677 "bdev_ftl_get_properties", 00:06:33.677 "bdev_ftl_get_stats", 00:06:33.677 "bdev_ftl_unmap", 00:06:33.677 "bdev_ftl_unload", 00:06:33.677 "bdev_ftl_delete", 00:06:33.677 "bdev_ftl_load", 00:06:33.677 "bdev_ftl_create", 00:06:33.677 "bdev_virtio_attach_controller", 00:06:33.677 "bdev_virtio_scsi_get_devices", 00:06:33.677 "bdev_virtio_detach_controller", 00:06:33.677 "bdev_virtio_blk_set_hotplug", 00:06:33.677 "bdev_iscsi_delete", 00:06:33.677 "bdev_iscsi_create", 00:06:33.677 "bdev_iscsi_set_options", 00:06:33.677 "accel_error_inject_error", 00:06:33.677 "ioat_scan_accel_module", 00:06:33.677 "dsa_scan_accel_module", 00:06:33.677 "iaa_scan_accel_module", 00:06:33.677 "keyring_file_remove_key", 00:06:33.677 "keyring_file_add_key", 00:06:33.678 "keyring_linux_set_options", 00:06:33.678 "fsdev_aio_delete", 00:06:33.678 "fsdev_aio_create", 00:06:33.678 "iscsi_get_histogram", 00:06:33.678 "iscsi_enable_histogram", 00:06:33.678 "iscsi_set_options", 00:06:33.678 "iscsi_get_auth_groups", 00:06:33.678 "iscsi_auth_group_remove_secret", 00:06:33.678 "iscsi_auth_group_add_secret", 00:06:33.678 "iscsi_delete_auth_group", 00:06:33.678 "iscsi_create_auth_group", 00:06:33.678 "iscsi_set_discovery_auth", 00:06:33.678 "iscsi_get_options", 00:06:33.678 "iscsi_target_node_request_logout", 00:06:33.678 "iscsi_target_node_set_redirect", 00:06:33.678 "iscsi_target_node_set_auth", 00:06:33.678 "iscsi_target_node_add_lun", 00:06:33.678 "iscsi_get_stats", 00:06:33.678 "iscsi_get_connections", 00:06:33.678 "iscsi_portal_group_set_auth", 00:06:33.678 "iscsi_start_portal_group", 00:06:33.678 "iscsi_delete_portal_group", 00:06:33.678 "iscsi_create_portal_group", 00:06:33.678 "iscsi_get_portal_groups", 00:06:33.678 "iscsi_delete_target_node", 00:06:33.678 "iscsi_target_node_remove_pg_ig_maps", 00:06:33.678 "iscsi_target_node_add_pg_ig_maps", 00:06:33.678 "iscsi_create_target_node", 00:06:33.678 "iscsi_get_target_nodes", 00:06:33.678 "iscsi_delete_initiator_group", 00:06:33.678 "iscsi_initiator_group_remove_initiators", 00:06:33.678 "iscsi_initiator_group_add_initiators", 00:06:33.678 "iscsi_create_initiator_group", 00:06:33.678 "iscsi_get_initiator_groups", 00:06:33.678 "nvmf_set_crdt", 00:06:33.678 "nvmf_set_config", 00:06:33.678 "nvmf_set_max_subsystems", 00:06:33.678 "nvmf_stop_mdns_prr", 00:06:33.678 "nvmf_publish_mdns_prr", 00:06:33.678 "nvmf_subsystem_get_listeners", 00:06:33.678 "nvmf_subsystem_get_qpairs", 00:06:33.678 "nvmf_subsystem_get_controllers", 00:06:33.678 "nvmf_get_stats", 00:06:33.678 "nvmf_get_transports", 00:06:33.678 "nvmf_create_transport", 00:06:33.678 "nvmf_get_targets", 00:06:33.678 
"nvmf_delete_target", 00:06:33.678 "nvmf_create_target", 00:06:33.678 "nvmf_subsystem_allow_any_host", 00:06:33.678 "nvmf_subsystem_set_keys", 00:06:33.678 "nvmf_subsystem_remove_host", 00:06:33.678 "nvmf_subsystem_add_host", 00:06:33.678 "nvmf_ns_remove_host", 00:06:33.678 "nvmf_ns_add_host", 00:06:33.678 "nvmf_subsystem_remove_ns", 00:06:33.678 "nvmf_subsystem_set_ns_ana_group", 00:06:33.678 "nvmf_subsystem_add_ns", 00:06:33.678 "nvmf_subsystem_listener_set_ana_state", 00:06:33.678 "nvmf_discovery_get_referrals", 00:06:33.678 "nvmf_discovery_remove_referral", 00:06:33.678 "nvmf_discovery_add_referral", 00:06:33.678 "nvmf_subsystem_remove_listener", 00:06:33.678 "nvmf_subsystem_add_listener", 00:06:33.678 "nvmf_delete_subsystem", 00:06:33.678 "nvmf_create_subsystem", 00:06:33.678 "nvmf_get_subsystems", 00:06:33.678 "env_dpdk_get_mem_stats", 00:06:33.678 "nbd_get_disks", 00:06:33.678 "nbd_stop_disk", 00:06:33.678 "nbd_start_disk", 00:06:33.678 "ublk_recover_disk", 00:06:33.678 "ublk_get_disks", 00:06:33.678 "ublk_stop_disk", 00:06:33.678 "ublk_start_disk", 00:06:33.678 "ublk_destroy_target", 00:06:33.678 "ublk_create_target", 00:06:33.678 "virtio_blk_create_transport", 00:06:33.678 "virtio_blk_get_transports", 00:06:33.678 "vhost_controller_set_coalescing", 00:06:33.678 "vhost_get_controllers", 00:06:33.678 "vhost_delete_controller", 00:06:33.678 "vhost_create_blk_controller", 00:06:33.678 "vhost_scsi_controller_remove_target", 00:06:33.678 "vhost_scsi_controller_add_target", 00:06:33.678 "vhost_start_scsi_controller", 00:06:33.678 "vhost_create_scsi_controller", 00:06:33.678 "thread_set_cpumask", 00:06:33.678 "scheduler_set_options", 00:06:33.678 "framework_get_governor", 00:06:33.678 "framework_get_scheduler", 00:06:33.678 "framework_set_scheduler", 00:06:33.678 "framework_get_reactors", 00:06:33.678 "thread_get_io_channels", 00:06:33.678 "thread_get_pollers", 00:06:33.678 "thread_get_stats", 00:06:33.678 "framework_monitor_context_switch", 00:06:33.678 "spdk_kill_instance", 00:06:33.678 "log_enable_timestamps", 00:06:33.678 "log_get_flags", 00:06:33.678 "log_clear_flag", 00:06:33.678 "log_set_flag", 00:06:33.678 "log_get_level", 00:06:33.678 "log_set_level", 00:06:33.678 "log_get_print_level", 00:06:33.678 "log_set_print_level", 00:06:33.678 "framework_enable_cpumask_locks", 00:06:33.678 "framework_disable_cpumask_locks", 00:06:33.678 "framework_wait_init", 00:06:33.678 "framework_start_init", 00:06:33.678 "scsi_get_devices", 00:06:33.678 "bdev_get_histogram", 00:06:33.678 "bdev_enable_histogram", 00:06:33.678 "bdev_set_qos_limit", 00:06:33.678 "bdev_set_qd_sampling_period", 00:06:33.678 "bdev_get_bdevs", 00:06:33.678 "bdev_reset_iostat", 00:06:33.678 "bdev_get_iostat", 00:06:33.678 "bdev_examine", 00:06:33.678 "bdev_wait_for_examine", 00:06:33.678 "bdev_set_options", 00:06:33.678 "accel_get_stats", 00:06:33.678 "accel_set_options", 00:06:33.678 "accel_set_driver", 00:06:33.678 "accel_crypto_key_destroy", 00:06:33.678 "accel_crypto_keys_get", 00:06:33.678 "accel_crypto_key_create", 00:06:33.678 "accel_assign_opc", 00:06:33.678 "accel_get_module_info", 00:06:33.678 "accel_get_opc_assignments", 00:06:33.678 "vmd_rescan", 00:06:33.678 "vmd_remove_device", 00:06:33.678 "vmd_enable", 00:06:33.678 "sock_get_default_impl", 00:06:33.678 "sock_set_default_impl", 00:06:33.678 "sock_impl_set_options", 00:06:33.678 "sock_impl_get_options", 00:06:33.678 "iobuf_get_stats", 00:06:33.678 "iobuf_set_options", 00:06:33.678 "keyring_get_keys", 00:06:33.678 "framework_get_pci_devices", 00:06:33.678 
"framework_get_config", 00:06:33.678 "framework_get_subsystems", 00:06:33.678 "fsdev_set_opts", 00:06:33.678 "fsdev_get_opts", 00:06:33.678 "trace_get_info", 00:06:33.678 "trace_get_tpoint_group_mask", 00:06:33.678 "trace_disable_tpoint_group", 00:06:33.678 "trace_enable_tpoint_group", 00:06:33.678 "trace_clear_tpoint_mask", 00:06:33.678 "trace_set_tpoint_mask", 00:06:33.678 "notify_get_notifications", 00:06:33.678 "notify_get_types", 00:06:33.678 "spdk_get_version", 00:06:33.678 "rpc_get_methods" 00:06:33.678 ] 00:06:33.678 08:27:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.678 08:27:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:33.678 08:27:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58726 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58726 ']' 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58726 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58726 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58726' 00:06:33.678 killing process with pid 58726 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58726 00:06:33.678 08:27:08 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58726 00:06:36.222 00:06:36.222 real 0m4.506s 00:06:36.222 user 0m7.952s 00:06:36.222 sys 0m0.717s 00:06:36.222 08:27:11 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.222 ************************************ 00:06:36.222 08:27:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.222 END TEST spdkcli_tcp 00:06:36.222 ************************************ 00:06:36.490 08:27:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.490 08:27:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.490 08:27:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.490 08:27:11 -- common/autotest_common.sh@10 -- # set +x 00:06:36.490 ************************************ 00:06:36.490 START TEST dpdk_mem_utility 00:06:36.490 ************************************ 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.490 * Looking for test storage... 
00:06:36.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.490 08:27:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.490 --rc genhtml_branch_coverage=1 00:06:36.490 --rc genhtml_function_coverage=1 00:06:36.490 --rc genhtml_legend=1 00:06:36.490 --rc geninfo_all_blocks=1 00:06:36.490 --rc geninfo_unexecuted_blocks=1 00:06:36.490 00:06:36.490 ' 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.490 --rc 
genhtml_branch_coverage=1 00:06:36.490 --rc genhtml_function_coverage=1 00:06:36.490 --rc genhtml_legend=1 00:06:36.490 --rc geninfo_all_blocks=1 00:06:36.490 --rc geninfo_unexecuted_blocks=1 00:06:36.490 00:06:36.490 ' 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.490 --rc genhtml_branch_coverage=1 00:06:36.490 --rc genhtml_function_coverage=1 00:06:36.490 --rc genhtml_legend=1 00:06:36.490 --rc geninfo_all_blocks=1 00:06:36.490 --rc geninfo_unexecuted_blocks=1 00:06:36.490 00:06:36.490 ' 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.490 --rc genhtml_branch_coverage=1 00:06:36.490 --rc genhtml_function_coverage=1 00:06:36.490 --rc genhtml_legend=1 00:06:36.490 --rc geninfo_all_blocks=1 00:06:36.490 --rc geninfo_unexecuted_blocks=1 00:06:36.490 00:06:36.490 ' 00:06:36.490 08:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:36.490 08:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58853 00:06:36.490 08:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.490 08:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58853 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58853 ']' 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.490 08:27:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.750 [2024-11-22 08:27:11.657476] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
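[Editor's note] The memory report that follows is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write a raw dump (the trace shows /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py renders it as heaps, mempools, and memzones, where -m 0 selects the per-element view of heap 0. A minimal sketch of the same flow:

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0            # per-element breakdown of heap 0, as below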
00:06:36.750 [2024-11-22 08:27:11.657604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58853 ] 00:06:37.009 [2024-11-22 08:27:11.838994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.009 [2024-11-22 08:27:11.984886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.946 08:27:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.946 08:27:12 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:37.946 08:27:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:37.946 08:27:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:37.946 08:27:12 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.946 08:27:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.946 { 00:06:37.946 "filename": "/tmp/spdk_mem_dump.txt" 00:06:37.946 } 00:06:37.946 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.946 08:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:38.207 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:38.207 1 heaps totaling size 816.000000 MiB 00:06:38.207 size: 816.000000 MiB heap id: 0 00:06:38.207 end heaps---------- 00:06:38.207 9 mempools totaling size 595.772034 MiB 00:06:38.207 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:38.207 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:38.207 size: 92.545471 MiB name: bdev_io_58853 00:06:38.207 size: 50.003479 MiB name: msgpool_58853 00:06:38.207 size: 36.509338 MiB name: fsdev_io_58853 00:06:38.207 size: 21.763794 MiB name: PDU_Pool 00:06:38.207 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:38.207 size: 4.133484 MiB name: evtpool_58853 00:06:38.207 size: 0.026123 MiB name: Session_Pool 00:06:38.207 end mempools------- 00:06:38.207 6 memzones totaling size 4.142822 MiB 00:06:38.207 size: 1.000366 MiB name: RG_ring_0_58853 00:06:38.207 size: 1.000366 MiB name: RG_ring_1_58853 00:06:38.207 size: 1.000366 MiB name: RG_ring_4_58853 00:06:38.207 size: 1.000366 MiB name: RG_ring_5_58853 00:06:38.207 size: 0.125366 MiB name: RG_ring_2_58853 00:06:38.207 size: 0.015991 MiB name: RG_ring_3_58853 00:06:38.207 end memzones------- 00:06:38.207 08:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:38.207 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:06:38.207 list of free elements. 
size: 16.790649 MiB 00:06:38.207 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:38.207 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:38.207 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:38.207 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:38.207 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:38.207 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:38.207 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:38.207 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:38.207 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:38.207 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:38.207 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:38.207 element at address: 0x20001ac00000 with size: 0.561218 MiB 00:06:38.207 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:38.207 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:38.207 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:38.207 element at address: 0x200012c00000 with size: 0.443237 MiB 00:06:38.207 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:38.207 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:38.207 list of standard malloc elements. size: 199.288452 MiB 00:06:38.207 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:38.207 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:38.207 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:06:38.207 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:38.207 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:38.207 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:38.207 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:06:38.207 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:38.207 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:38.207 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:06:38.207 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:38.207 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:38.207 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:38.207 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:38.208 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:38.208 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71780 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71880 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71980 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c72080 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012c72180 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:06:38.208 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 
00:06:38.209 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:06:38.209 element at 
address: 0x20001ac94cc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:06:38.209 element at address: 0x200028063f40 with size: 0.000244 MiB 00:06:38.209 element at address: 0x200028064040 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806af80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b080 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b180 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b280 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b380 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b480 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b580 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b680 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b780 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b880 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806b980 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806be80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c080 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c180 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c280 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c380 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c480 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c580 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c680 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c780 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c880 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806c980 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d080 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d180 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d280 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d380 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d480 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d580 
with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d680 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d780 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d880 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806d980 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806da80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806db80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806de80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806df80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e080 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e180 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e280 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e380 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e480 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e580 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e680 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e780 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e880 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806e980 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f080 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f180 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f280 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f380 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f480 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f580 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f680 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f780 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f880 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806f980 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:06:38.209 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:06:38.209 list of memzone associated elements. 
size: 599.920898 MiB 00:06:38.209 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:06:38.209 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:38.210 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:06:38.210 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:38.210 element at address: 0x200012df4740 with size: 92.045105 MiB 00:06:38.210 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58853_0 00:06:38.210 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:38.210 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58853_0 00:06:38.210 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:38.210 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58853_0 00:06:38.210 element at address: 0x2000197be900 with size: 20.255615 MiB 00:06:38.210 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:38.210 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:06:38.210 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:38.210 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:38.210 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58853_0 00:06:38.210 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:38.210 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58853 00:06:38.210 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:38.210 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58853 00:06:38.210 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:38.210 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:38.210 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:06:38.210 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:38.210 element at address: 0x200018afde00 with size: 1.008179 MiB 00:06:38.210 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:38.210 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:06:38.210 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:38.210 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:38.210 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58853 00:06:38.210 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:38.210 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58853 00:06:38.210 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:06:38.210 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58853 00:06:38.210 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:06:38.210 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58853 00:06:38.210 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:38.210 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58853 00:06:38.210 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:38.210 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58853 00:06:38.210 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:06:38.210 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:38.210 element at address: 0x200012c72280 with size: 0.500549 MiB 00:06:38.210 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:38.210 element at address: 0x20001967c440 with size: 0.250549 MiB 00:06:38.210 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:38.210 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:38.210 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58853 00:06:38.210 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:38.210 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58853 00:06:38.210 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:06:38.210 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:38.210 element at address: 0x200028064140 with size: 0.023804 MiB 00:06:38.210 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:38.210 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:38.210 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58853 00:06:38.210 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:06:38.210 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:38.210 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:38.210 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58853 00:06:38.210 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:38.210 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58853 00:06:38.210 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:38.210 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58853 00:06:38.210 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:06:38.210 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:38.210 08:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:38.210 08:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58853 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58853 ']' 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58853 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58853 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.210 killing process with pid 58853 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58853' 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58853 00:06:38.210 08:27:13 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58853 00:06:40.746 00:06:40.746 real 0m4.234s 00:06:40.746 user 0m3.974s 00:06:40.746 sys 0m0.748s 00:06:40.746 08:27:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.746 ************************************ 00:06:40.746 END TEST dpdk_mem_utility 00:06:40.746 ************************************ 00:06:40.746 08:27:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.747 08:27:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:40.747 08:27:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.747 08:27:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.747 08:27:15 -- common/autotest_common.sh@10 -- # set +x 
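The element and memzone listing above is the dpdk_mem_utility test inspecting the heap of the SPDK target it started (pid 58853): a handful of large free and allocated regions, several hundred small bookkeeping elements, and the named memzones (msgpool, PDU pools, bdev/fsdev rings) whose names and sizes close out the dump. For reference, DPDK exposes public helpers that emit this kind of dump directly from inside a process; a minimal sketch, assuming a DPDK development environment is installed (the file name and build line are illustrative, not taken from the log):

    /* mem_dump.c - print heap elements and named memzones from a DPDK
     * process, comparable in content to the dump in the log above.
     * Build (assumption): cc mem_dump.c $(pkg-config --cflags --libs libdpdk)
     */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_malloc.h>
    #include <rte_memzone.h>

    int main(int argc, char **argv)
    {
            /* The EAL must be up before any memory can be queried. */
            if (rte_eal_init(argc, argv) < 0) {
                    fprintf(stderr, "rte_eal_init failed\n");
                    return 1;
            }

            /* Every malloc heap element: address, size, free or busy. */
            rte_malloc_dump_heaps(stdout);

            /* Named regions - the MP_ and RG_ memzones seen above. */
            rte_memzone_dump(stdout);

            return rte_eal_cleanup() < 0 ? 1 : 0;
    }

The test itself appears to collect the same information over SPDK's RPC layer from the running target rather than linking against DPDK directly, which is why the dump shows up interleaved with the rpc/killprocess shell trace.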
00:06:40.747 ************************************ 00:06:40.747 START TEST event 00:06:40.747 ************************************ 00:06:40.747 08:27:15 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:40.747 * Looking for test storage... 00:06:40.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:40.747 08:27:15 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.747 08:27:15 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.747 08:27:15 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.006 08:27:15 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.006 08:27:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.006 08:27:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.006 08:27:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.006 08:27:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.006 08:27:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.006 08:27:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.006 08:27:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.007 08:27:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.007 08:27:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.007 08:27:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.007 08:27:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.007 08:27:15 event -- scripts/common.sh@344 -- # case "$op" in 00:06:41.007 08:27:15 event -- scripts/common.sh@345 -- # : 1 00:06:41.007 08:27:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.007 08:27:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.007 08:27:15 event -- scripts/common.sh@365 -- # decimal 1 00:06:41.007 08:27:15 event -- scripts/common.sh@353 -- # local d=1 00:06:41.007 08:27:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.007 08:27:15 event -- scripts/common.sh@355 -- # echo 1 00:06:41.007 08:27:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.007 08:27:15 event -- scripts/common.sh@366 -- # decimal 2 00:06:41.007 08:27:15 event -- scripts/common.sh@353 -- # local d=2 00:06:41.007 08:27:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.007 08:27:15 event -- scripts/common.sh@355 -- # echo 2 00:06:41.007 08:27:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.007 08:27:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.007 08:27:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.007 08:27:15 event -- scripts/common.sh@368 -- # return 0 00:06:41.007 08:27:15 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.007 08:27:15 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.007 --rc genhtml_branch_coverage=1 00:06:41.007 --rc genhtml_function_coverage=1 00:06:41.007 --rc genhtml_legend=1 00:06:41.007 --rc geninfo_all_blocks=1 00:06:41.007 --rc geninfo_unexecuted_blocks=1 00:06:41.007 00:06:41.007 ' 00:06:41.007 08:27:15 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.007 --rc genhtml_branch_coverage=1 00:06:41.007 --rc genhtml_function_coverage=1 00:06:41.007 --rc genhtml_legend=1 00:06:41.007 --rc 
geninfo_all_blocks=1 00:06:41.007 --rc geninfo_unexecuted_blocks=1 00:06:41.007 00:06:41.007 ' 00:06:41.007 08:27:15 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.007 --rc genhtml_branch_coverage=1 00:06:41.007 --rc genhtml_function_coverage=1 00:06:41.007 --rc genhtml_legend=1 00:06:41.007 --rc geninfo_all_blocks=1 00:06:41.007 --rc geninfo_unexecuted_blocks=1 00:06:41.007 00:06:41.007 ' 00:06:41.007 08:27:15 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.007 --rc genhtml_branch_coverage=1 00:06:41.007 --rc genhtml_function_coverage=1 00:06:41.007 --rc genhtml_legend=1 00:06:41.007 --rc geninfo_all_blocks=1 00:06:41.007 --rc geninfo_unexecuted_blocks=1 00:06:41.007 00:06:41.007 ' 00:06:41.007 08:27:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:41.007 08:27:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.007 08:27:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.007 08:27:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:41.007 08:27:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.007 08:27:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.007 ************************************ 00:06:41.007 START TEST event_perf 00:06:41.007 ************************************ 00:06:41.007 08:27:15 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.007 Running I/O for 1 seconds...[2024-11-22 08:27:15.917840] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:06:41.007 [2024-11-22 08:27:15.917949] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:06:41.294 [2024-11-22 08:27:16.098256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.294 [2024-11-22 08:27:16.224856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.294 Running I/O for 1 seconds...[2024-11-22 08:27:16.225068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.294 [2024-11-22 08:27:16.225206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.294 [2024-11-22 08:27:16.225237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.715 00:06:42.715 lcore 0: 211788 00:06:42.715 lcore 1: 211786 00:06:42.715 lcore 2: 211786 00:06:42.715 lcore 3: 211786 00:06:42.715 done. 
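The event_perf run above (-m 0xF -t 1) starts four reactors and counts how many events each lcore turns over in one second; the four "lcore N:" totals are that count. The primitive being hammered is SPDK's event API. A minimal sketch of the same send-and-handle pattern, written against the public spdk/event.h interface; the program name and reactor mask are illustrative:

    /* event_example.c - send a single event to a reactor, then stop.
     * Sketch only: assumes an SPDK build environment is available. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/event.h"

    static void
    hello_fn(void *arg1, void *arg2)
    {
            printf("event handled on core %u\n", spdk_env_get_current_core());
            spdk_app_stop(0);
    }

    static void
    app_main(void *ctx)
    {
            /* Allocate an event bound to this core's reactor and fire it. */
            struct spdk_event *ev = spdk_event_allocate(spdk_env_get_current_core(),
                                                        hello_fn, NULL, NULL);
            spdk_event_call(ev);
    }

    int
    main(int argc, char **argv)
    {
            struct spdk_app_opts opts = {};
            int rc;

            spdk_app_opts_init(&opts, sizeof(opts));
            opts.name = "event_example";
            opts.reactor_mask = "0xF";  /* four reactors, as in the run above */

            rc = spdk_app_start(&opts, app_main, NULL);
            spdk_app_fini();
            return rc;
    }

event_perf does the same thing in a tight loop, re-posting an event from each handler, so the per-lcore totals are effectively a round-trip throughput figure for the event ring.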
00:06:42.715 00:06:42.715 real 0m1.606s 00:06:42.715 user 0m4.355s 00:06:42.715 sys 0m0.130s 00:06:42.715 08:27:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.715 08:27:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.715 ************************************ 00:06:42.715 END TEST event_perf 00:06:42.715 ************************************ 00:06:42.715 08:27:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:42.715 08:27:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:42.715 08:27:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.715 08:27:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.715 ************************************ 00:06:42.715 START TEST event_reactor 00:06:42.715 ************************************ 00:06:42.715 08:27:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:42.715 [2024-11-22 08:27:17.597803] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:06:42.715 [2024-11-22 08:27:17.597921] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59001 ] 00:06:42.715 [2024-11-22 08:27:17.779047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.974 [2024-11-22 08:27:17.896318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.352 test_start 00:06:44.352 oneshot 00:06:44.352 tick 100 00:06:44.352 tick 100 00:06:44.352 tick 250 00:06:44.352 tick 100 00:06:44.352 tick 100 00:06:44.352 tick 100 00:06:44.352 tick 250 00:06:44.352 tick 500 00:06:44.352 tick 100 00:06:44.352 tick 100 00:06:44.352 tick 250 00:06:44.352 tick 100 00:06:44.352 tick 100 00:06:44.352 test_end 00:06:44.352 00:06:44.352 real 0m1.575s 00:06:44.352 user 0m1.359s 00:06:44.352 sys 0m0.107s 00:06:44.352 08:27:19 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.352 08:27:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:44.352 ************************************ 00:06:44.352 END TEST event_reactor 00:06:44.352 ************************************ 00:06:44.352 08:27:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.352 08:27:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:44.352 08:27:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.352 08:27:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.352 ************************************ 00:06:44.352 START TEST event_reactor_perf 00:06:44.352 ************************************ 00:06:44.352 08:27:19 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.352 [2024-11-22 08:27:19.250172] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
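The test_start/oneshot/tick/test_end block above is the event_reactor test driving timed pollers on a single reactor; the numbers after each tick appear to be the pollers' configured periods, and "oneshot" a poller that retires after one call. User code registers the same kind of poller with spdk_poller_register(). A short sketch, assuming it runs inside a running app as in the previous example (the 100-microsecond period and the five-tick limit are illustrative):

    #include "spdk/stdinc.h"
    #include "spdk/event.h"
    #include "spdk/thread.h"

    static struct spdk_poller *g_tick_poller;
    static int g_ticks;

    /* Invoked by the owning reactor every 100 microseconds. */
    static int
    tick_fn(void *ctx)
    {
            printf("tick %d\n", ++g_ticks);
            if (g_ticks == 5) {
                    /* Stop after five ticks and exit the app. */
                    spdk_poller_unregister(&g_tick_poller);
                    spdk_app_stop(0);
            }
            return SPDK_POLLER_BUSY;
    }

    /* Pass this as the start function to spdk_app_start(). */
    static void
    start_ticks(void *ctx)
    {
            g_tick_poller = spdk_poller_register(tick_fn, NULL, 100);
    }

A period of 0 would make the poller run on every reactor iteration instead of on a timer, which is the other common mode and what the reactor_perf test that follows is measuring.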
00:06:44.352 [2024-11-22 08:27:19.250294] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59043 ] 00:06:44.352 [2024-11-22 08:27:19.431350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.611 [2024-11-22 08:27:19.542778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.994 test_start 00:06:45.994 test_end 00:06:45.994 Performance: 382452 events per second 00:06:45.994 00:06:45.994 real 0m1.568s 00:06:45.994 user 0m1.351s 00:06:45.994 sys 0m0.109s 00:06:45.994 08:27:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.994 08:27:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.994 ************************************ 00:06:45.994 END TEST event_reactor_perf 00:06:45.994 ************************************ 00:06:45.994 08:27:20 event -- event/event.sh@49 -- # uname -s 00:06:45.994 08:27:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.994 08:27:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:45.994 08:27:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.994 08:27:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.994 08:27:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.994 ************************************ 00:06:45.994 START TEST event_scheduler 00:06:45.994 ************************************ 00:06:45.994 08:27:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:45.994 * Looking for test storage... 
00:06:45.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:45.994 08:27:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.994 08:27:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.994 08:27:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.994 08:27:21 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.994 08:27:21 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.254 08:27:21 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.254 --rc genhtml_branch_coverage=1 00:06:46.254 --rc genhtml_function_coverage=1 00:06:46.254 --rc genhtml_legend=1 00:06:46.254 --rc geninfo_all_blocks=1 00:06:46.254 --rc geninfo_unexecuted_blocks=1 00:06:46.254 00:06:46.254 ' 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.254 --rc genhtml_branch_coverage=1 00:06:46.254 --rc genhtml_function_coverage=1 00:06:46.254 --rc genhtml_legend=1 00:06:46.254 --rc geninfo_all_blocks=1 00:06:46.254 --rc geninfo_unexecuted_blocks=1 00:06:46.254 00:06:46.254 ' 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.254 --rc genhtml_branch_coverage=1 00:06:46.254 --rc genhtml_function_coverage=1 00:06:46.254 --rc genhtml_legend=1 00:06:46.254 --rc geninfo_all_blocks=1 00:06:46.254 --rc geninfo_unexecuted_blocks=1 00:06:46.254 00:06:46.254 ' 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.254 --rc genhtml_branch_coverage=1 00:06:46.254 --rc genhtml_function_coverage=1 00:06:46.254 --rc genhtml_legend=1 00:06:46.254 --rc geninfo_all_blocks=1 00:06:46.254 --rc geninfo_unexecuted_blocks=1 00:06:46.254 00:06:46.254 ' 00:06:46.254 08:27:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:46.254 08:27:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59119 00:06:46.254 08:27:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:46.254 08:27:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.254 08:27:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59119 00:06:46.254 08:27:21 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59119 ']' 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.254 08:27:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.254 [2024-11-22 08:27:21.176346] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:06:46.254 [2024-11-22 08:27:21.176731] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59119 ] 00:06:46.514 [2024-11-22 08:27:21.366094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.514 [2024-11-22 08:27:21.483611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.514 [2024-11-22 08:27:21.483700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.514 [2024-11-22 08:27:21.483835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.514 [2024-11-22 08:27:21.483869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.083 08:27:22 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.083 08:27:22 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:47.083 08:27:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:47.083 08:27:22 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.083 08:27:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.083 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:47.083 POWER: Cannot set governor of lcore 0 to userspace 00:06:47.083 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:47.083 POWER: Cannot set governor of lcore 0 to performance 00:06:47.083 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:47.083 POWER: Cannot set governor of lcore 0 to userspace 00:06:47.083 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:47.083 POWER: Cannot set governor of lcore 0 to userspace 00:06:47.083 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:47.083 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:47.083 POWER: Unable to set Power Management Environment for lcore 0 00:06:47.084 [2024-11-22 08:27:22.018852] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:47.084 [2024-11-22 08:27:22.019064] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:47.084 [2024-11-22 08:27:22.019154] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:47.084 [2024-11-22 08:27:22.019304] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:47.084 [2024-11-22 08:27:22.019502] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:47.084 [2024-11-22 08:27:22.019550] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:47.084 08:27:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.084 08:27:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:47.084 08:27:22 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.084 08:27:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.343 [2024-11-22 08:27:22.345178] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:47.343 08:27:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.343 08:27:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:47.343 08:27:22 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.343 08:27:22 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.343 08:27:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.343 ************************************ 00:06:47.343 START TEST scheduler_create_thread 00:06:47.343 ************************************ 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.343 2 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.343 3 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.343 4 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.343 5 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.343 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.603 6 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.603 7 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.603 8 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.603 9 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.603 10 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.603 08:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.981 08:27:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.981 08:27:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:48.981 08:27:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:48.981 08:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.981 08:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.550 08:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.550 08:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:49.550 08:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.550 08:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.488 08:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.488 08:27:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:50.488 08:27:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:50.488 08:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.488 08:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.426 ************************************ 00:06:51.426 END TEST scheduler_create_thread 00:06:51.426 ************************************ 00:06:51.426 08:27:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.426 00:06:51.426 real 0m3.883s 00:06:51.426 user 0m0.025s 00:06:51.426 sys 0m0.008s 00:06:51.426 08:27:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.426 08:27:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.426 08:27:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:51.426 08:27:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59119 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59119 ']' 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59119 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59119 00:06:51.426 killing process with pid 59119 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59119' 00:06:51.426 08:27:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59119 00:06:51.426 08:27:26 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59119 00:06:51.686 [2024-11-22 08:27:26.623538] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:53.066 ************************************ 00:06:53.066 END TEST event_scheduler 00:06:53.066 ************************************ 00:06:53.066 00:06:53.066 real 0m7.007s 00:06:53.066 user 0m14.411s 00:06:53.066 sys 0m0.555s 00:06:53.066 08:27:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.066 08:27:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:53.066 08:27:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:53.066 08:27:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:53.066 08:27:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.066 08:27:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.066 08:27:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.066 ************************************ 00:06:53.066 START TEST app_repeat 00:06:53.066 ************************************ 00:06:53.066 08:27:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:53.066 Process app_repeat pid: 59236 00:06:53.066 spdk_app_start Round 0 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59236 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59236' 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:53.066 08:27:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59236 /var/tmp/spdk-nbd.sock 00:06:53.066 08:27:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59236 ']' 00:06:53.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:53.066 08:27:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.066 08:27:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.066 08:27:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.066 08:27:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.066 08:27:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.066 [2024-11-22 08:27:28.018880] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
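The event_scheduler section that just ended switches the framework to the dynamic scheduler (rpc_cmd framework_set_scheduler dynamic); the POWER errors only mean the VM exposes no cpufreq scaling governor, so the DPDK governor is skipped and the scheduler proceeds with the load limit 20 / core limit 80 / core busy 95 settings it logs. The threads the test then creates over the plugin RPC ("active_pinned -m 0x1 -a 100" and so on) correspond to ordinary SPDK threads created with a CPU mask. A hedged sketch of one such pinned thread, assuming it is called from a start function inside a running app as in the earlier event example (the thread name and core number are illustrative):

    #include "spdk/stdinc.h"
    #include "spdk/cpuset.h"
    #include "spdk/thread.h"

    /* Create a lightweight SPDK thread pinned to core 0, analogous to
     * the "active_pinned -m 0x1" threads the test creates over RPC. */
    static struct spdk_thread *
    create_pinned_thread(void)
    {
            struct spdk_cpuset mask;

            spdk_cpuset_zero(&mask);
            spdk_cpuset_set_cpu(&mask, 0, true);   /* mask 0x1: core 0 only */
            return spdk_thread_create("active_pinned", &mask);
    }

Under the dynamic scheduler the framework periodically compares each thread's busy time against those limits and may migrate threads between reactors, which is what the create/set_active/delete sequence in the log is exercising.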
00:06:53.066 [2024-11-22 08:27:28.019185] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59236 ] 00:06:53.326 [2024-11-22 08:27:28.202033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.326 [2024-11-22 08:27:28.320340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.326 [2024-11-22 08:27:28.320376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.895 08:27:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.895 08:27:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:53.895 08:27:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.154 Malloc0 00:06:54.154 08:27:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.413 Malloc1 00:06:54.413 08:27:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.413 08:27:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.673 /dev/nbd0 00:06:54.673 08:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.673 08:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:54.673 08:27:29 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.673 1+0 records in 00:06:54.673 1+0 records out 00:06:54.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378289 s, 10.8 MB/s 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.673 08:27:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:54.673 08:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.673 08:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.673 08:27:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.931 /dev/nbd1 00:06:54.931 08:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.931 08:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.931 1+0 records in 00:06:54.931 1+0 records out 00:06:54.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204229 s, 20.1 MB/s 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.931 08:27:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:54.931 08:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.931 08:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.931 08:27:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.931 08:27:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:06:54.931 08:27:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.191 { 00:06:55.191 "nbd_device": "/dev/nbd0", 00:06:55.191 "bdev_name": "Malloc0" 00:06:55.191 }, 00:06:55.191 { 00:06:55.191 "nbd_device": "/dev/nbd1", 00:06:55.191 "bdev_name": "Malloc1" 00:06:55.191 } 00:06:55.191 ]' 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.191 { 00:06:55.191 "nbd_device": "/dev/nbd0", 00:06:55.191 "bdev_name": "Malloc0" 00:06:55.191 }, 00:06:55.191 { 00:06:55.191 "nbd_device": "/dev/nbd1", 00:06:55.191 "bdev_name": "Malloc1" 00:06:55.191 } 00:06:55.191 ]' 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.191 /dev/nbd1' 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.191 /dev/nbd1' 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.191 256+0 records in 00:06:55.191 256+0 records out 00:06:55.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532354 s, 197 MB/s 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.191 08:27:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.192 256+0 records in 00:06:55.192 256+0 records out 00:06:55.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305173 s, 34.4 MB/s 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.192 256+0 records in 00:06:55.192 256+0 records out 00:06:55.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336137 s, 31.2 MB/s 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.192 08:27:30 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.192 08:27:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.451 08:27:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.711 08:27:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.711 08:27:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.971 08:27:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.971 08:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.971 08:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.971 08:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.971 08:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.971 08:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.971 08:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.971 08:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.971 08:27:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.971 08:27:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.971 08:27:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.971 08:27:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.971 08:27:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:56.540 08:27:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:57.478 [2024-11-22 08:27:32.528371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.738 [2024-11-22 08:27:32.635722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.738 [2024-11-22 08:27:32.635723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.997 [2024-11-22 08:27:32.828754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:57.997 [2024-11-22 08:27:32.828837] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.375 spdk_app_start Round 1 00:06:59.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.375 08:27:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.375 08:27:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:59.375 08:27:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59236 /var/tmp/spdk-nbd.sock 00:06:59.375 08:27:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59236 ']' 00:06:59.375 08:27:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.375 08:27:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.375 08:27:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
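Round 1 now blocks in waitforlisten until the restarted app accepts RPCs on /var/tmp/spdk-nbd.sock. The traced internals (max_retries=100, xtrace disabled for the poll, the (( i == 0 )) / return 0 pair on success) fit a poll loop of roughly this shape; a simplified sketch, not the exact autotest_common.sh helper:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [[ -S $rpc_addr ]] && return 0           # socket is up; RPCs can flow
            sleep 0.1
        done
        return 1                                     # gave up after max_retries tries
    }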
00:06:59.375 08:27:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.375 08:27:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.634 08:27:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.634 08:27:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:59.634 08:27:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.894 Malloc0 00:06:59.894 08:27:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.154 Malloc1 00:07:00.154 08:27:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.154 08:27:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:00.414 /dev/nbd0 00:07:00.414 08:27:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.414 08:27:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.414 1+0 records in 00:07:00.414 1+0 records out 
00:07:00.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262689 s, 15.6 MB/s 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.414 08:27:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:00.414 08:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.414 08:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.414 08:27:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.674 /dev/nbd1 00:07:00.674 08:27:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.674 08:27:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.674 1+0 records in 00:07:00.674 1+0 records out 00:07:00.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435293 s, 9.4 MB/s 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.674 08:27:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:00.674 08:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.674 08:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.674 08:27:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.674 08:27:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.674 08:27:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.934 { 00:07:00.934 "nbd_device": "/dev/nbd0", 00:07:00.934 "bdev_name": "Malloc0" 00:07:00.934 }, 00:07:00.934 { 00:07:00.934 "nbd_device": "/dev/nbd1", 00:07:00.934 "bdev_name": "Malloc1" 00:07:00.934 } 
00:07:00.934 ]' 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.934 { 00:07:00.934 "nbd_device": "/dev/nbd0", 00:07:00.934 "bdev_name": "Malloc0" 00:07:00.934 }, 00:07:00.934 { 00:07:00.934 "nbd_device": "/dev/nbd1", 00:07:00.934 "bdev_name": "Malloc1" 00:07:00.934 } 00:07:00.934 ]' 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.934 /dev/nbd1' 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.934 /dev/nbd1' 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.934 256+0 records in 00:07:00.934 256+0 records out 00:07:00.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123213 s, 85.1 MB/s 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.934 256+0 records in 00:07:00.934 256+0 records out 00:07:00.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301513 s, 34.8 MB/s 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.934 08:27:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.934 256+0 records in 00:07:00.934 256+0 records out 00:07:00.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031582 s, 33.2 MB/s 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:00.934 08:27:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.194 08:27:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.454 08:27:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.714 08:27:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.714 08:27:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.314 08:27:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:03.252 [2024-11-22 08:27:38.247076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.511 [2024-11-22 08:27:38.351250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.511 [2024-11-22 08:27:38.351271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.511 [2024-11-22 08:27:38.543893] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:03.511 [2024-11-22 08:27:38.543973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:05.418 spdk_app_start Round 2 00:07:05.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:05.418 08:27:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:05.418 08:27:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:05.418 08:27:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59236 /var/tmp/spdk-nbd.sock 00:07:05.418 08:27:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59236 ']' 00:07:05.418 08:27:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.418 08:27:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.418 08:27:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
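Every round's data pass is the same reference-file round trip through both NBD devices, visible in the dd and cmp lines above. Condensed to its essentials (block sizes, counts, and paths copied from the trace; error handling elided):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write it through NBD
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                            # byte-compare the readback
    done
    rm "$tmp"                                                 # clean up between rounds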
00:07:05.418 08:27:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.419 08:27:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.419 08:27:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.419 08:27:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:05.419 08:27:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.677 Malloc0 00:07:05.677 08:27:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.936 Malloc1 00:07:05.936 08:27:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.936 08:27:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:06.195 /dev/nbd0 00:07:06.195 08:27:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.195 08:27:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.195 1+0 records in 00:07:06.195 1+0 records out 
00:07:06.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331088 s, 12.4 MB/s 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.195 08:27:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:06.195 08:27:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.195 08:27:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.195 08:27:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:06.454 /dev/nbd1 00:07:06.454 08:27:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.454 08:27:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.454 1+0 records in 00:07:06.454 1+0 records out 00:07:06.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412439 s, 9.9 MB/s 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.454 08:27:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:06.454 08:27:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.454 08:27:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.454 08:27:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.454 08:27:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.454 08:27:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.712 { 00:07:06.712 "nbd_device": "/dev/nbd0", 00:07:06.712 "bdev_name": "Malloc0" 00:07:06.712 }, 00:07:06.712 { 00:07:06.712 "nbd_device": "/dev/nbd1", 00:07:06.712 "bdev_name": "Malloc1" 00:07:06.712 } 
00:07:06.712 ]' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.712 { 00:07:06.712 "nbd_device": "/dev/nbd0", 00:07:06.712 "bdev_name": "Malloc0" 00:07:06.712 }, 00:07:06.712 { 00:07:06.712 "nbd_device": "/dev/nbd1", 00:07:06.712 "bdev_name": "Malloc1" 00:07:06.712 } 00:07:06.712 ]' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.712 /dev/nbd1' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.712 /dev/nbd1' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:06.712 256+0 records in 00:07:06.712 256+0 records out 00:07:06.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00718942 s, 146 MB/s 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.712 256+0 records in 00:07:06.712 256+0 records out 00:07:06.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254823 s, 41.1 MB/s 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.712 256+0 records in 00:07:06.712 256+0 records out 00:07:06.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030022 s, 34.9 MB/s 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.712 08:27:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.970 08:27:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.227 08:27:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.486 08:27:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.486 08:27:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:08.053 08:27:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.989 [2024-11-22 08:27:44.015005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.248 [2024-11-22 08:27:44.122312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.248 [2024-11-22 08:27:44.122313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.248 [2024-11-22 08:27:44.308844] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:09.248 [2024-11-22 08:27:44.308927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:11.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.151 08:27:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59236 /var/tmp/spdk-nbd.sock 00:07:11.151 08:27:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59236 ']' 00:07:11.151 08:27:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.151 08:27:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.151 08:27:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
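Before each SIGTERM the harness asserts that teardown really detached everything: nbd_get_disks must return an empty list. Per the nbd_common.sh@63-@105 trace lines above, the check reduces to the following sketch (the || true absorbs grep's nonzero exit when nothing matches, which is the bare "true" step visible in the trace):

    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        exit 1    # a leftover NBD export fails the round
    fi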
00:07:11.151 08:27:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.151 08:27:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:11.151 08:27:46 event.app_repeat -- event/event.sh@39 -- # killprocess 59236 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59236 ']' 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59236 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59236 00:07:11.151 killing process with pid 59236 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59236' 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59236 00:07:11.151 08:27:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59236 00:07:12.087 spdk_app_start is called in Round 0. 00:07:12.087 Shutdown signal received, stop current app iteration 00:07:12.087 Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 reinitialization... 00:07:12.087 spdk_app_start is called in Round 1. 00:07:12.087 Shutdown signal received, stop current app iteration 00:07:12.087 Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 reinitialization... 00:07:12.087 spdk_app_start is called in Round 2. 00:07:12.087 Shutdown signal received, stop current app iteration 00:07:12.087 Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 reinitialization... 00:07:12.087 spdk_app_start is called in Round 3. 00:07:12.087 Shutdown signal received, stop current app iteration 00:07:12.087 ************************************ 00:07:12.087 END TEST app_repeat 00:07:12.087 ************************************ 00:07:12.087 08:27:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:12.087 08:27:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:12.087 00:07:12.087 real 0m19.202s 00:07:12.087 user 0m40.635s 00:07:12.087 sys 0m3.209s 00:07:12.087 08:27:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.087 08:27:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.345 08:27:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:12.345 08:27:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:12.345 08:27:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.345 08:27:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.345 08:27:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.345 ************************************ 00:07:12.345 START TEST cpu_locks 00:07:12.345 ************************************ 00:07:12.345 08:27:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:12.345 * Looking for test storage... 
00:07:12.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:12.345 08:27:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:12.345 08:27:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:12.345 08:27:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.605 08:27:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:12.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.605 --rc genhtml_branch_coverage=1 00:07:12.605 --rc genhtml_function_coverage=1 00:07:12.605 --rc genhtml_legend=1 00:07:12.605 --rc geninfo_all_blocks=1 00:07:12.605 --rc geninfo_unexecuted_blocks=1 00:07:12.605 00:07:12.605 ' 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:12.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.605 --rc genhtml_branch_coverage=1 00:07:12.605 --rc genhtml_function_coverage=1 
00:07:12.605 --rc genhtml_legend=1 00:07:12.605 --rc geninfo_all_blocks=1 00:07:12.605 --rc geninfo_unexecuted_blocks=1 00:07:12.605 00:07:12.605 ' 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:12.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.605 --rc genhtml_branch_coverage=1 00:07:12.605 --rc genhtml_function_coverage=1 00:07:12.605 --rc genhtml_legend=1 00:07:12.605 --rc geninfo_all_blocks=1 00:07:12.605 --rc geninfo_unexecuted_blocks=1 00:07:12.605 00:07:12.605 ' 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:12.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.605 --rc genhtml_branch_coverage=1 00:07:12.605 --rc genhtml_function_coverage=1 00:07:12.605 --rc genhtml_legend=1 00:07:12.605 --rc geninfo_all_blocks=1 00:07:12.605 --rc geninfo_unexecuted_blocks=1 00:07:12.605 00:07:12.605 ' 00:07:12.605 08:27:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:12.605 08:27:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:12.605 08:27:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:12.605 08:27:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.605 08:27:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.605 ************************************ 00:07:12.605 START TEST default_locks 00:07:12.605 ************************************ 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59683 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59683 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59683 ']' 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.605 08:27:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.605 [2024-11-22 08:27:47.579833] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
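Note: the default_locks case that starts here reduces to a simple lifecycle check: start one spdk_tgt pinned to core 0 (-m 0x1), verify it holds a CPU-core lock, kill it, and verify a further waitforlisten on the dead pid fails. A sketch of that flow, paraphrased from the helper names in this trace (not the verbatim test body; the lock-file path comes from the check_remaining_locks glob later in this log):

    spdk_tgt -m 0x1 &                  # claims core 0 -> /var/tmp/spdk_cpu_lock_000
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"      # target is up on /var/tmp/spdk.sock
    locks_exist "$spdk_tgt_pid"        # core lock must be visible in lslocks
    killprocess "$spdk_tgt_pid"
    NOT waitforlisten "$spdk_tgt_pid"  # pid is gone, so this must fail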
00:07:12.605 [2024-11-22 08:27:47.579982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59683 ] 00:07:12.864 [2024-11-22 08:27:47.760683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.864 [2024-11-22 08:27:47.871433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.799 08:27:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.799 08:27:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:13.799 08:27:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59683 00:07:13.799 08:27:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59683 00:07:13.799 08:27:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59683 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59683 ']' 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59683 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59683 00:07:14.366 killing process with pid 59683 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59683' 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59683 00:07:14.366 08:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59683 00:07:16.900 08:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59683 00:07:16.900 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:16.900 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59683 00:07:16.900 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:16.900 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.900 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:16.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
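Note: the locks_exist call traced above (cpu_locks.sh@22 runs lslocks -p <pid> piped into grep -q spdk_cpu_lock) reconstructs to a one-line helper:

    # reconstructed from the xtrace above; succeeds if the pid holds any spdk_cpu_lock_* file lock
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }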
00:07:16.900 ERROR: process (pid: 59683) is no longer running 00:07:16.900 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.900 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59683 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59683 ']' 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59683) - No such process 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:16.901 ************************************ 00:07:16.901 END TEST default_locks 00:07:16.901 ************************************ 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:16.901 00:07:16.901 real 0m4.089s 00:07:16.901 user 0m4.052s 00:07:16.901 sys 0m0.684s 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.901 08:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.901 08:27:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:16.901 08:27:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.901 08:27:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.901 08:27:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.901 ************************************ 00:07:16.901 START TEST default_locks_via_rpc 00:07:16.901 ************************************ 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59758 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59758 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59758 ']' 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.901 08:27:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.901 [2024-11-22 08:27:51.744406] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:07:16.901 [2024-11-22 08:27:51.744718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59758 ] 00:07:16.901 [2024-11-22 08:27:51.927714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.160 [2024-11-22 08:27:52.035774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59758 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59758 00:07:18.097 08:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.357 08:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59758 
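Note: default_locks_via_rpc toggles the same lock over JSON-RPC instead of process lifetime: framework_disable_cpumask_locks must release the core lock, and framework_enable_cpumask_locks must re-claim it, exactly the rpc_cmd sequence traced above. Driven by hand it would look roughly like this (scripts/rpc.py is assumed here as the standalone equivalent of the test's rpc_cmd wrapper):

    scripts/rpc.py framework_disable_cpumask_locks   # target releases /var/tmp/spdk_cpu_lock_*
    ! locks_exist "$spdk_tgt_pid"                    # nothing held while disabled
    scripts/rpc.py framework_enable_cpumask_locks    # target re-claims core 0
    locks_exist "$spdk_tgt_pid"                      # lock is back, as checked above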
00:07:18.357 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59758 ']' 00:07:18.357 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59758 00:07:18.357 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:18.357 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.357 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59758 00:07:18.661 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.661 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.661 killing process with pid 59758 00:07:18.661 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59758' 00:07:18.661 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59758 00:07:18.661 08:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59758 00:07:21.198 00:07:21.198 real 0m4.189s 00:07:21.198 user 0m4.133s 00:07:21.198 sys 0m0.706s 00:07:21.198 08:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.198 ************************************ 00:07:21.198 END TEST default_locks_via_rpc 00:07:21.199 08:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 ************************************ 00:07:21.199 08:27:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:21.199 08:27:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.199 08:27:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.199 08:27:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 ************************************ 00:07:21.199 START TEST non_locking_app_on_locked_coremask 00:07:21.199 ************************************ 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59834 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59834 /var/tmp/spdk.sock 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59834 ']' 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
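Note: non_locking_app_on_locked_coremask, starting here, checks that --disable-cpumask-locks really opts a process out of the claim: pid 59834 locks core 0, and a second target is then started on the same mask with locking disabled (see the launch below) and must come up cleanly. The two launches, paths shortened from this run:

    build/bin/spdk_tgt -m 0x1 &                      # claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                     # shares core 0 without claiming it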
00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.199 08:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.199 [2024-11-22 08:27:56.005205] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:07:21.199 [2024-11-22 08:27:56.005832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59834 ] 00:07:21.199 [2024-11-22 08:27:56.185893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.458 [2024-11-22 08:27:56.296211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59851 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59851 /var/tmp/spdk2.sock 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59851 ']' 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.396 08:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.396 [2024-11-22 08:27:57.275615] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:07:22.396 [2024-11-22 08:27:57.276130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59851 ] 00:07:22.396 [2024-11-22 08:27:57.460531] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:22.396 [2024-11-22 08:27:57.460584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.655 [2024-11-22 08:27:57.699158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.193 08:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.193 08:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.193 08:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59834 00:07:25.193 08:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59834 00:07:25.193 08:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59834 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59834 ']' 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59834 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59834 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59834' 00:07:25.762 killing process with pid 59834 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59834 00:07:25.762 08:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59834 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59851 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59851 ']' 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59851 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59851 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.038 killing process with pid 59851 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59851' 00:07:31.038 08:28:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59851 00:07:31.038 08:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59851 00:07:32.971 00:07:32.971 real 0m11.948s 00:07:32.971 user 0m12.277s 00:07:32.972 sys 0m1.366s 00:07:32.972 08:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.972 ************************************ 00:07:32.972 END TEST non_locking_app_on_locked_coremask 00:07:32.972 ************************************ 00:07:32.972 08:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.972 08:28:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:32.972 08:28:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.972 08:28:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.972 08:28:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.972 ************************************ 00:07:32.972 START TEST locking_app_on_unlocked_coremask 00:07:32.972 ************************************ 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60009 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60009 /var/tmp/spdk.sock 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60009 ']' 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.972 08:28:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:32.972 [2024-11-22 08:28:08.028589] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:07:32.972 [2024-11-22 08:28:08.028718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60009 ] 00:07:33.231 [2024-11-22 08:28:08.211949] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:33.231 [2024-11-22 08:28:08.212021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.490 [2024-11-22 08:28:08.327805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60025 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60025 /var/tmp/spdk2.sock 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60025 ']' 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.428 08:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.428 [2024-11-22 08:28:09.296322] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
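Note: locking_app_on_unlocked_coremask is the mirror image of the previous case: the first target (60009) runs with --disable-cpumask-locks, so core 0 stays unclaimed, and the second target (60025) starts with locking enabled on the same -m 0x1 and must succeed in taking the lock. Roughly:

    spdk_tgt -m 0x1 --disable-cpumask-locks &    # runs on core 0, claims nothing
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # locking enabled; takes /var/tmp/spdk_cpu_lock_000
    locks_exist "$spdk_tgt_pid2"                 # the lock belongs to the second pid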
00:07:34.428 [2024-11-22 08:28:09.296454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:07:34.428 [2024-11-22 08:28:09.478118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.687 [2024-11-22 08:28:09.722054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.226 08:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.226 08:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:37.226 08:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60025 00:07:37.226 08:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60025 00:07:37.226 08:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60009 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60009 ']' 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60009 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60009 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.794 killing process with pid 60009 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60009' 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60009 00:07:37.794 08:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60009 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60025 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60025 ']' 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60025 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60025 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.071 killing process with pid 60025 00:07:43.071 08:28:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60025' 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60025 00:07:43.071 08:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60025 00:07:44.974 00:07:44.974 real 0m12.029s 00:07:44.974 user 0m12.392s 00:07:44.974 sys 0m1.347s 00:07:44.974 ************************************ 00:07:44.974 END TEST locking_app_on_unlocked_coremask 00:07:44.974 ************************************ 00:07:44.974 08:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.974 08:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.974 08:28:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:44.974 08:28:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.974 08:28:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.974 08:28:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.974 ************************************ 00:07:44.974 START TEST locking_app_on_locked_coremask 00:07:44.974 ************************************ 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60180 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60180 /var/tmp/spdk.sock 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60180 ']' 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.974 08:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:45.233 [2024-11-22 08:28:20.123573] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
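Note: locking_app_on_locked_coremask now exercises the enforcement path itself: with pid 60180 holding core 0, a second locking instance on the same mask must refuse to start (the claim_cpu_cores error appears below). Conceptually each claim is a per-core file lock; a loose shell analogy, not SPDK's actual implementation:

    # analogy only: SPDK takes a POSIX lock on a per-core file from app.c,
    # it does not shell out to flock
    exec {fd}>"/var/tmp/spdk_cpu_lock_000"
    flock -n "$fd" || { echo "core 0 already claimed" >&2; exit 1; }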
00:07:45.233 [2024-11-22 08:28:20.123697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60180 ] 00:07:45.233 [2024-11-22 08:28:20.299800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.491 [2024-11-22 08:28:20.421596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60201 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60201 /var/tmp/spdk2.sock 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60201 /var/tmp/spdk2.sock 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.429 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60201 /var/tmp/spdk2.sock 00:07:46.430 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60201 ']' 00:07:46.430 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.430 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.430 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.430 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.430 08:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.430 [2024-11-22 08:28:21.436523] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
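Note: the NOT wrapper traced above is how the test asserts an expected failure: it runs the wrapped command and succeeds only if that command exits non-zero. A simplified paraphrase of the autotest_common.sh helper (the real one also validates the argument type and screens exit codes above 128):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # success only when the wrapped command failed
    }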
00:07:46.430 [2024-11-22 08:28:21.436656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 00:07:46.689 [2024-11-22 08:28:21.621845] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60180 has claimed it. 00:07:46.689 [2024-11-22 08:28:21.621924] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:47.258 ERROR: process (pid: 60201) is no longer running 00:07:47.258 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60201) - No such process 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60180 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60180 00:07:47.258 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60180 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60180 ']' 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60180 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60180 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.517 killing process with pid 60180 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60180' 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60180 00:07:47.517 08:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60180 00:07:50.056 00:07:50.056 real 0m4.915s 00:07:50.056 user 0m5.074s 00:07:50.056 sys 0m0.862s 00:07:50.056 08:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.056 08:28:24 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:50.056 ************************************ 00:07:50.056 END TEST locking_app_on_locked_coremask 00:07:50.056 ************************************ 00:07:50.056 08:28:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:50.056 08:28:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.056 08:28:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.056 08:28:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.056 ************************************ 00:07:50.056 START TEST locking_overlapped_coremask 00:07:50.056 ************************************ 00:07:50.056 08:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:50.056 08:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60271 00:07:50.056 08:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:50.056 08:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60271 /var/tmp/spdk.sock 00:07:50.056 08:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60271 ']' 00:07:50.056 08:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.056 08:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.056 08:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.056 08:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.056 08:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.056 [2024-11-22 08:28:25.107483] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:07:50.056 [2024-11-22 08:28:25.107616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60271 ] 00:07:50.316 [2024-11-22 08:28:25.289193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.576 [2024-11-22 08:28:25.403641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.576 [2024-11-22 08:28:25.403782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.576 [2024-11-22 08:28:25.403814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60289 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60289 /var/tmp/spdk2.sock 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60289 /var/tmp/spdk2.sock 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:51.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60289 /var/tmp/spdk2.sock 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60289 ']' 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.512 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.512 [2024-11-22 08:28:26.392744] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
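Note: locking_overlapped_coremask moves to multi-core masks: the first target holds 0x7 (cores 0-2) and the second requests 0x1c (cores 2-4). The masks overlap only on core 2, and that single collision is enough to make the second claim fail, as the error below shows. Quick mask check:

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4: bit 2 set, so core 2 is contested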
00:07:51.512 [2024-11-22 08:28:26.393110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60289 ] 00:07:51.512 [2024-11-22 08:28:26.577634] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60271 has claimed it. 00:07:51.512 [2024-11-22 08:28:26.577709] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:52.095 ERROR: process (pid: 60289) is no longer running 00:07:52.095 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60289) - No such process 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60271 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60271 ']' 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60271 00:07:52.095 08:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:52.095 08:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.095 08:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60271 00:07:52.095 killing process with pid 60271 00:07:52.095 08:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.095 08:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.095 08:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60271' 00:07:52.095 08:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60271 00:07:52.095 08:28:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60271 00:07:54.634 ************************************ 00:07:54.634 END TEST locking_overlapped_coremask 00:07:54.634 ************************************ 00:07:54.634 00:07:54.634 real 0m4.456s 00:07:54.634 user 0m12.054s 00:07:54.634 sys 0m0.643s 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.634 08:28:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:54.634 08:28:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.634 08:28:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.634 08:28:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.634 ************************************ 00:07:54.634 START TEST locking_overlapped_coremask_via_rpc 00:07:54.634 ************************************ 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60353 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60353 /var/tmp/spdk.sock 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60353 ']' 00:07:54.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.634 08:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.634 [2024-11-22 08:28:29.643476] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:07:54.634 [2024-11-22 08:28:29.643598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60353 ] 00:07:54.895 [2024-11-22 08:28:29.820082] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
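Note: after the failed overlapping claim, check_remaining_locks (cpu_locks.sh@36-38 in the trace above) verifies that the surviving target still holds exactly its three lock files and nothing leaked. Reconstructed from the xtrace:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 of the 0x7 target
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }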
00:07:54.895 [2024-11-22 08:28:29.820131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.895 [2024-11-22 08:28:29.944107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.895 [2024-11-22 08:28:29.944244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.895 [2024-11-22 08:28:29.944274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60376 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60376 /var/tmp/spdk2.sock 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60376 ']' 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.834 08:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.094 [2024-11-22 08:28:30.922881] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:07:56.094 [2024-11-22 08:28:30.923741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:07:56.094 [2024-11-22 08:28:31.109100] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
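Note: the via_rpc variant starts both overlapping targets with --disable-cpumask-locks so they can coexist, then races the claim over JSON-RPC: the first framework_enable_cpumask_locks takes cores 0-2, and the same call against the second target's socket must fail on the contested core 2, as the error response below confirms. By hand (scripts/rpc.py again assumed as the rpc_cmd equivalent):

    scripts/rpc.py framework_enable_cpumask_locks                         # first target claims 0x7
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # must fail
    # expected JSON-RPC error: {"code": -32603, "message": "Failed to claim CPU core: 2"}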
00:07:56.094 [2024-11-22 08:28:31.109153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.353 [2024-11-22 08:28:31.346001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.353 [2024-11-22 08:28:31.346100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.353 [2024-11-22 08:28:31.346132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.892 [2024-11-22 08:28:33.567201] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60353 has claimed it. 
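Both targets start with locking disabled, so the test re-enables it over RPC; a sketch of the sequence that produces the ERROR above, written with scripts/rpc.py in place of the harness's rpc_cmd wrapper:

  # first target claims cores 0-2, creating /var/tmp/spdk_cpu_lock_000..002
  scripts/rpc.py framework_enable_cpumask_locks
  # second target then tries to claim cores 2-4; core 2 is already locked,
  # so the call fails with the -32603 response dumped below
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks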
00:07:58.892 request: 00:07:58.892 { 00:07:58.892 "method": "framework_enable_cpumask_locks", 00:07:58.892 "req_id": 1 00:07:58.892 } 00:07:58.892 Got JSON-RPC error response 00:07:58.892 response: 00:07:58.892 { 00:07:58.892 "code": -32603, 00:07:58.892 "message": "Failed to claim CPU core: 2" 00:07:58.892 } 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60353 /var/tmp/spdk.sock 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60353 ']' 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60376 /var/tmp/spdk2.sock 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60376 ']' 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.892 08:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.152 08:28:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.152 08:28:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:59.152 08:28:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:59.152 08:28:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:59.152 08:28:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:59.152 08:28:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:59.152 00:07:59.152 real 0m4.487s 00:07:59.152 user 0m1.322s 00:07:59.152 sys 0m0.229s 00:07:59.152 ************************************ 00:07:59.152 END TEST locking_overlapped_coremask_via_rpc 00:07:59.152 ************************************ 00:07:59.152 08:28:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.152 08:28:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.152 08:28:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:59.152 08:28:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60353 ]] 00:07:59.152 08:28:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60353 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60353 ']' 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60353 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60353 00:07:59.152 killing process with pid 60353 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60353' 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60353 00:07:59.152 08:28:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60353 00:08:01.690 08:28:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60376 ]] 00:08:01.690 08:28:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60376 00:08:01.690 08:28:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60376 ']' 00:08:01.690 08:28:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60376 00:08:01.690 08:28:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:01.690 08:28:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.690 
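The check_remaining_locks helper traced above asserts that, after the second target failed to claim core 2, the lock files on disk are exactly the ones mask 0x7 produced; in effect:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]   # cores 0-2 locked, nothing else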
08:28:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60376 00:08:01.690 killing process with pid 60376 00:08:01.690 08:28:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:01.690 08:28:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:01.690 08:28:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60376' 00:08:01.691 08:28:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60376 00:08:01.691 08:28:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60376 00:08:04.233 08:28:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:04.233 08:28:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:04.233 08:28:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60353 ]] 00:08:04.233 08:28:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60353 00:08:04.233 08:28:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60353 ']' 00:08:04.233 08:28:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60353 00:08:04.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60353) - No such process 00:08:04.233 Process with pid 60353 is not found 00:08:04.233 08:28:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60353 is not found' 00:08:04.233 Process with pid 60376 is not found 00:08:04.233 08:28:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60376 ]] 00:08:04.233 08:28:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60376 00:08:04.233 08:28:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60376 ']' 00:08:04.233 08:28:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60376 00:08:04.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60376) - No such process 00:08:04.233 08:28:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60376 is not found' 00:08:04.233 08:28:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:04.233 00:08:04.233 real 0m51.791s 00:08:04.233 user 1m27.842s 00:08:04.233 sys 0m7.121s 00:08:04.233 08:28:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.233 ************************************ 00:08:04.233 END TEST cpu_locks 00:08:04.233 ************************************ 00:08:04.233 08:28:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.233 ************************************ 00:08:04.233 END TEST event 00:08:04.233 ************************************ 00:08:04.233 00:08:04.233 real 1m23.447s 00:08:04.233 user 2m30.220s 00:08:04.233 sys 0m11.649s 00:08:04.233 08:28:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.233 08:28:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:04.233 08:28:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:04.233 08:28:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.233 08:28:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.233 08:28:39 -- common/autotest_common.sh@10 -- # set +x 00:08:04.233 ************************************ 00:08:04.233 START TEST thread 00:08:04.233 ************************************ 00:08:04.233 08:28:39 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:04.233 * Looking for test storage... 
00:08:04.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:04.233 08:28:39 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:04.233 08:28:39 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:04.233 08:28:39 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:04.492 08:28:39 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:04.492 08:28:39 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.492 08:28:39 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.492 08:28:39 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.492 08:28:39 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.492 08:28:39 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.492 08:28:39 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.492 08:28:39 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.492 08:28:39 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.492 08:28:39 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.492 08:28:39 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.492 08:28:39 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.492 08:28:39 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:04.492 08:28:39 thread -- scripts/common.sh@345 -- # : 1 00:08:04.492 08:28:39 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.492 08:28:39 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.492 08:28:39 thread -- scripts/common.sh@365 -- # decimal 1 00:08:04.492 08:28:39 thread -- scripts/common.sh@353 -- # local d=1 00:08:04.492 08:28:39 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.492 08:28:39 thread -- scripts/common.sh@355 -- # echo 1 00:08:04.492 08:28:39 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.492 08:28:39 thread -- scripts/common.sh@366 -- # decimal 2 00:08:04.492 08:28:39 thread -- scripts/common.sh@353 -- # local d=2 00:08:04.492 08:28:39 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.492 08:28:39 thread -- scripts/common.sh@355 -- # echo 2 00:08:04.492 08:28:39 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.492 08:28:39 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.492 08:28:39 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.492 08:28:39 thread -- scripts/common.sh@368 -- # return 0 00:08:04.492 08:28:39 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.492 08:28:39 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:04.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.492 --rc genhtml_branch_coverage=1 00:08:04.492 --rc genhtml_function_coverage=1 00:08:04.492 --rc genhtml_legend=1 00:08:04.492 --rc geninfo_all_blocks=1 00:08:04.492 --rc geninfo_unexecuted_blocks=1 00:08:04.492 00:08:04.492 ' 00:08:04.492 08:28:39 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:04.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.492 --rc genhtml_branch_coverage=1 00:08:04.492 --rc genhtml_function_coverage=1 00:08:04.492 --rc genhtml_legend=1 00:08:04.492 --rc geninfo_all_blocks=1 00:08:04.492 --rc geninfo_unexecuted_blocks=1 00:08:04.492 00:08:04.492 ' 00:08:04.492 08:28:39 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:04.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:04.492 --rc genhtml_branch_coverage=1 00:08:04.493 --rc genhtml_function_coverage=1 00:08:04.493 --rc genhtml_legend=1 00:08:04.493 --rc geninfo_all_blocks=1 00:08:04.493 --rc geninfo_unexecuted_blocks=1 00:08:04.493 00:08:04.493 ' 00:08:04.493 08:28:39 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:04.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.493 --rc genhtml_branch_coverage=1 00:08:04.493 --rc genhtml_function_coverage=1 00:08:04.493 --rc genhtml_legend=1 00:08:04.493 --rc geninfo_all_blocks=1 00:08:04.493 --rc geninfo_unexecuted_blocks=1 00:08:04.493 00:08:04.493 ' 00:08:04.493 08:28:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.493 08:28:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:04.493 08:28:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.493 08:28:39 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.493 ************************************ 00:08:04.493 START TEST thread_poller_perf 00:08:04.493 ************************************ 00:08:04.493 08:28:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.493 [2024-11-22 08:28:39.447855] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:08:04.493 [2024-11-22 08:28:39.448243] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60577 ] 00:08:04.752 [2024-11-22 08:28:39.632863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.752 [2024-11-22 08:28:39.749425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.752 Running 1000 pollers for 1 seconds with 1 microseconds period. 
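poller_perf's flags map directly onto the banner above: -b is the number of pollers registered, -l the poller period in microseconds, -t the run time in seconds. The two runs in this test differ only in the period:

  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same pollers, 0 us period (run next)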
[2024-11-22T08:28:41.219Z] ======================================
00:08:06.132 [2024-11-22T08:28:41.219Z] busy:2498140248 (cyc)
00:08:06.132 [2024-11-22T08:28:41.219Z] total_run_count: 388000
00:08:06.132 [2024-11-22T08:28:41.219Z] tsc_hz: 2490000000 (cyc)
00:08:06.132 [2024-11-22T08:28:41.219Z] ======================================
00:08:06.132 [2024-11-22T08:28:41.219Z] poller_cost: 6438 (cyc), 2585 (nsec)
00:08:06.132
00:08:06.132 real 0m1.591s
00:08:06.132 user 0m1.364s
00:08:06.132 sys 0m0.119s
00:08:06.132 08:28:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:06.132 08:28:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:06.133 ************************************
00:08:06.133 END TEST thread_poller_perf
00:08:06.133 ************************************
00:08:06.133 08:28:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:06.133 08:28:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:08:06.133 08:28:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:06.133 08:28:41 thread -- common/autotest_common.sh@10 -- # set +x
00:08:06.133 ************************************
00:08:06.133 START TEST thread_poller_perf
00:08:06.133 ************************************
00:08:06.133 08:28:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:06.392 [2024-11-22 08:28:41.113216] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:08:06.392 [2024-11-22 08:28:41.113345] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ]
00:08:06.392 [2024-11-22 08:28:41.293788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:06.392 Running 1000 pollers for 1 seconds with 0 microseconds period.
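poller_cost in the table above is the other counters reduced; for the 1 us run:

  poller_cost (cyc)  = busy / total_run_count   = 2498140248 / 388000      ~= 6438
  poller_cost (nsec) = cyc * 1e9 / tsc_hz       = 6438 * 1e9 / 2490000000  ~= 2585

The 0 us results that follow reduce the same way: 2493757386 / 5169000 ~= 482 cyc, i.e. 193 nsec.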
00:08:06.392 [2024-11-22 08:28:41.406909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.771 [2024-11-22T08:28:42.858Z] ====================================== 00:08:07.771 [2024-11-22T08:28:42.858Z] busy:2493757386 (cyc) 00:08:07.771 [2024-11-22T08:28:42.858Z] total_run_count: 5169000 00:08:07.771 [2024-11-22T08:28:42.858Z] tsc_hz: 2490000000 (cyc) 00:08:07.771 [2024-11-22T08:28:42.858Z] ====================================== 00:08:07.771 [2024-11-22T08:28:42.858Z] poller_cost: 482 (cyc), 193 (nsec) 00:08:07.771 00:08:07.771 real 0m1.575s 00:08:07.771 user 0m1.350s 00:08:07.771 sys 0m0.118s 00:08:07.771 08:28:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.771 08:28:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:07.771 ************************************ 00:08:07.771 END TEST thread_poller_perf 00:08:07.771 ************************************ 00:08:07.771 08:28:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:07.771 00:08:07.771 real 0m3.552s 00:08:07.771 user 0m2.888s 00:08:07.771 sys 0m0.455s 00:08:07.771 ************************************ 00:08:07.771 END TEST thread 00:08:07.771 ************************************ 00:08:07.771 08:28:42 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.771 08:28:42 thread -- common/autotest_common.sh@10 -- # set +x 00:08:07.771 08:28:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:07.771 08:28:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:07.771 08:28:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.771 08:28:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.771 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:08:07.771 ************************************ 00:08:07.771 START TEST app_cmdline 00:08:07.771 ************************************ 00:08:07.771 08:28:42 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:08.031 * Looking for test storage... 
00:08:08.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.031 08:28:42 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.031 --rc genhtml_branch_coverage=1 00:08:08.031 --rc genhtml_function_coverage=1 00:08:08.031 --rc genhtml_legend=1 00:08:08.031 --rc geninfo_all_blocks=1 00:08:08.031 --rc geninfo_unexecuted_blocks=1 00:08:08.031 00:08:08.031 ' 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.031 --rc genhtml_branch_coverage=1 00:08:08.031 --rc genhtml_function_coverage=1 00:08:08.031 --rc genhtml_legend=1 00:08:08.031 --rc geninfo_all_blocks=1 00:08:08.031 --rc geninfo_unexecuted_blocks=1 00:08:08.031 
00:08:08.031 ' 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.031 --rc genhtml_branch_coverage=1 00:08:08.031 --rc genhtml_function_coverage=1 00:08:08.031 --rc genhtml_legend=1 00:08:08.031 --rc geninfo_all_blocks=1 00:08:08.031 --rc geninfo_unexecuted_blocks=1 00:08:08.031 00:08:08.031 ' 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.031 --rc genhtml_branch_coverage=1 00:08:08.031 --rc genhtml_function_coverage=1 00:08:08.031 --rc genhtml_legend=1 00:08:08.031 --rc geninfo_all_blocks=1 00:08:08.031 --rc geninfo_unexecuted_blocks=1 00:08:08.031 00:08:08.031 ' 00:08:08.031 08:28:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:08.031 08:28:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60697 00:08:08.031 08:28:42 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:08.031 08:28:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60697 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60697 ']' 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.031 08:28:42 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.032 08:28:42 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.032 08:28:42 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.032 08:28:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:08.032 [2024-11-22 08:28:43.094618] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
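cmdline.sh starts this target with an RPC allow-list, so only the two listed methods are reachable and every other method fails as if it did not exist. A sketch of the behaviour verified below:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
  scripts/rpc.py spdk_get_version          # allowed: returns the version JSON
  scripts/rpc.py env_dpdk_get_mem_stats    # filtered: -32601 'Method not found'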
00:08:08.032 [2024-11-22 08:28:43.094947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60697 ] 00:08:08.291 [2024-11-22 08:28:43.278010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.550 [2024-11-22 08:28:43.396621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.489 08:28:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.489 08:28:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:09.489 { 00:08:09.489 "version": "SPDK v25.01-pre git sha1 a6ed92877", 00:08:09.489 "fields": { 00:08:09.489 "major": 25, 00:08:09.489 "minor": 1, 00:08:09.489 "patch": 0, 00:08:09.489 "suffix": "-pre", 00:08:09.489 "commit": "a6ed92877" 00:08:09.489 } 00:08:09.489 } 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:09.489 08:28:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.489 08:28:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:09.489 08:28:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:09.489 08:28:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:09.490 08:28:44 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.749 request: 00:08:09.749 { 00:08:09.749 "method": "env_dpdk_get_mem_stats", 00:08:09.749 "req_id": 1 00:08:09.749 } 00:08:09.749 Got JSON-RPC error response 00:08:09.749 response: 00:08:09.749 { 00:08:09.749 "code": -32601, 00:08:09.749 "message": "Method not found" 00:08:09.749 } 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.749 08:28:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60697 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60697 ']' 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60697 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60697 00:08:09.749 killing process with pid 60697 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60697' 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@973 -- # kill 60697 00:08:09.749 08:28:44 app_cmdline -- common/autotest_common.sh@978 -- # wait 60697 00:08:12.292 00:08:12.292 real 0m4.418s 00:08:12.292 user 0m4.582s 00:08:12.292 sys 0m0.676s 00:08:12.292 ************************************ 00:08:12.292 END TEST app_cmdline 00:08:12.292 ************************************ 00:08:12.292 08:28:47 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.292 08:28:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:12.292 08:28:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:12.292 08:28:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.292 08:28:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.292 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:08:12.292 ************************************ 00:08:12.292 START TEST version 00:08:12.292 ************************************ 00:08:12.292 08:28:47 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:12.292 * Looking for test storage... 
00:08:12.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.552 08:28:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.552 08:28:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.552 08:28:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.552 08:28:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.552 08:28:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.552 08:28:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.552 08:28:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.552 08:28:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.552 08:28:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.552 08:28:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.552 08:28:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.552 08:28:47 version -- scripts/common.sh@344 -- # case "$op" in 00:08:12.552 08:28:47 version -- scripts/common.sh@345 -- # : 1 00:08:12.552 08:28:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.552 08:28:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.552 08:28:47 version -- scripts/common.sh@365 -- # decimal 1 00:08:12.552 08:28:47 version -- scripts/common.sh@353 -- # local d=1 00:08:12.552 08:28:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.552 08:28:47 version -- scripts/common.sh@355 -- # echo 1 00:08:12.552 08:28:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.552 08:28:47 version -- scripts/common.sh@366 -- # decimal 2 00:08:12.552 08:28:47 version -- scripts/common.sh@353 -- # local d=2 00:08:12.552 08:28:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.552 08:28:47 version -- scripts/common.sh@355 -- # echo 2 00:08:12.552 08:28:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.552 08:28:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.552 08:28:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.552 08:28:47 version -- scripts/common.sh@368 -- # return 0 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.552 --rc genhtml_branch_coverage=1 00:08:12.552 --rc genhtml_function_coverage=1 00:08:12.552 --rc genhtml_legend=1 00:08:12.552 --rc geninfo_all_blocks=1 00:08:12.552 --rc geninfo_unexecuted_blocks=1 00:08:12.552 00:08:12.552 ' 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.552 --rc genhtml_branch_coverage=1 00:08:12.552 --rc genhtml_function_coverage=1 00:08:12.552 --rc genhtml_legend=1 00:08:12.552 --rc geninfo_all_blocks=1 00:08:12.552 --rc geninfo_unexecuted_blocks=1 00:08:12.552 00:08:12.552 ' 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.552 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:12.552 --rc genhtml_branch_coverage=1 00:08:12.552 --rc genhtml_function_coverage=1 00:08:12.552 --rc genhtml_legend=1 00:08:12.552 --rc geninfo_all_blocks=1 00:08:12.552 --rc geninfo_unexecuted_blocks=1 00:08:12.552 00:08:12.552 ' 00:08:12.552 08:28:47 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.552 --rc genhtml_branch_coverage=1 00:08:12.552 --rc genhtml_function_coverage=1 00:08:12.552 --rc genhtml_legend=1 00:08:12.552 --rc geninfo_all_blocks=1 00:08:12.552 --rc geninfo_unexecuted_blocks=1 00:08:12.552 00:08:12.552 ' 00:08:12.552 08:28:47 version -- app/version.sh@17 -- # get_header_version major 00:08:12.552 08:28:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.552 08:28:47 version -- app/version.sh@14 -- # cut -f2 00:08:12.552 08:28:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.552 08:28:47 version -- app/version.sh@17 -- # major=25 00:08:12.552 08:28:47 version -- app/version.sh@18 -- # get_header_version minor 00:08:12.552 08:28:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.552 08:28:47 version -- app/version.sh@14 -- # cut -f2 00:08:12.552 08:28:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.552 08:28:47 version -- app/version.sh@18 -- # minor=1 00:08:12.552 08:28:47 version -- app/version.sh@19 -- # get_header_version patch 00:08:12.552 08:28:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.552 08:28:47 version -- app/version.sh@14 -- # cut -f2 00:08:12.552 08:28:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.552 08:28:47 version -- app/version.sh@19 -- # patch=0 00:08:12.553 08:28:47 version -- app/version.sh@20 -- # get_header_version suffix 00:08:12.553 08:28:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.553 08:28:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.553 08:28:47 version -- app/version.sh@14 -- # cut -f2 00:08:12.553 08:28:47 version -- app/version.sh@20 -- # suffix=-pre 00:08:12.553 08:28:47 version -- app/version.sh@22 -- # version=25.1 00:08:12.553 08:28:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:12.553 08:28:47 version -- app/version.sh@28 -- # version=25.1rc0 00:08:12.553 08:28:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:12.553 08:28:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:12.553 08:28:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:12.553 08:28:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:12.553 00:08:12.553 real 0m0.317s 00:08:12.553 user 0m0.179s 00:08:12.553 sys 0m0.191s 00:08:12.553 ************************************ 00:08:12.553 END TEST version 00:08:12.553 ************************************ 00:08:12.553 08:28:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.553 08:28:47 version -- common/autotest_common.sh@10 -- # set +x 00:08:12.553 08:28:47 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:12.553 08:28:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:12.553 08:28:47 -- spdk/autotest.sh@194 -- # uname -s 00:08:12.553 08:28:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:12.553 08:28:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:12.553 08:28:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:12.553 08:28:47 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:12.553 08:28:47 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:12.553 08:28:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.553 08:28:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.553 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:08:12.813 ************************************ 00:08:12.813 START TEST blockdev_nvme 00:08:12.813 ************************************ 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:12.813 * Looking for test storage... 00:08:12.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.813 08:28:47 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.813 --rc genhtml_branch_coverage=1 00:08:12.813 --rc genhtml_function_coverage=1 00:08:12.813 --rc genhtml_legend=1 00:08:12.813 --rc geninfo_all_blocks=1 00:08:12.813 --rc geninfo_unexecuted_blocks=1 00:08:12.813 00:08:12.813 ' 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.813 --rc genhtml_branch_coverage=1 00:08:12.813 --rc genhtml_function_coverage=1 00:08:12.813 --rc genhtml_legend=1 00:08:12.813 --rc geninfo_all_blocks=1 00:08:12.813 --rc geninfo_unexecuted_blocks=1 00:08:12.813 00:08:12.813 ' 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.813 --rc genhtml_branch_coverage=1 00:08:12.813 --rc genhtml_function_coverage=1 00:08:12.813 --rc genhtml_legend=1 00:08:12.813 --rc geninfo_all_blocks=1 00:08:12.813 --rc geninfo_unexecuted_blocks=1 00:08:12.813 00:08:12.813 ' 00:08:12.813 08:28:47 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.813 --rc genhtml_branch_coverage=1 00:08:12.813 --rc genhtml_function_coverage=1 00:08:12.813 --rc genhtml_legend=1 00:08:12.813 --rc geninfo_all_blocks=1 00:08:12.813 --rc geninfo_unexecuted_blocks=1 00:08:12.813 00:08:12.813 ' 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:12.813 08:28:47 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:12.813 08:28:47 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60891 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:13.073 08:28:47 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60891 00:08:13.073 08:28:47 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60891 ']' 00:08:13.073 08:28:47 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.073 08:28:47 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.073 08:28:47 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.073 08:28:47 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.073 08:28:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.073 [2024-11-22 08:28:48.000049] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
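setup_nvme_conf, traced below, feeds the output of scripts/gen_nvme.sh to load_subsystem_config; the generated bdev subsystem config attaches one controller per PCIe address. Trimmed here to the first entry (Nvme1 through Nvme3 repeat the pattern for 0000:00:11.0 through 0000:00:13.0):

  { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
  ] }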
00:08:13.073 [2024-11-22 08:28:48.000382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60891 ] 00:08:13.332 [2024-11-22 08:28:48.183239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.332 [2024-11-22 08:28:48.305078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.271 08:28:49 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.271 08:28:49 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:08:14.271 08:28:49 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:14.271 08:28:49 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:14.271 08:28:49 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:14.271 08:28:49 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:14.271 08:28:49 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:14.271 08:28:49 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:14.271 08:28:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.271 08:28:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.530 08:28:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.530 08:28:49 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:14.530 08:28:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.530 08:28:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.530 08:28:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.530 08:28:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:14.530 08:28:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:14.530 08:28:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.530 08:28:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.791 08:28:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.791 08:28:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.791 08:28:49 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:14.791 08:28:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:14.791 08:28:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.791 08:28:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.791 08:28:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:14.791 08:28:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:14.792 08:28:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d2df68da-60e3-4b3a-88da-c46cdf1ca803"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d2df68da-60e3-4b3a-88da-c46cdf1ca803",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fb1c5902-caa3-4cdc-af92-ba00e2192dd4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fb1c5902-caa3-4cdc-af92-ba00e2192dd4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1b726503-3240-46b1-ad1a-e099e8cc6f0d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b726503-3240-46b1-ad1a-e099e8cc6f0d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "31f48f4d-362f-4eaf-b3f8-b8a01ea795ff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31f48f4d-362f-4eaf-b3f8-b8a01ea795ff",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "61a9386f-98e1-4005-9474-aa09352c9fd9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "61a9386f-98e1-4005-9474-aa09352c9fd9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c928749c-642e-4ac8-80d0-624f7c977b91"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c928749c-642e-4ac8-80d0-624f7c977b91",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:14.792 08:28:49 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:14.792 08:28:49 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:14.792 08:28:49 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:14.792 08:28:49 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60891 00:08:14.792 08:28:49 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60891 ']' 00:08:14.792 08:28:49 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60891 00:08:14.792 08:28:49 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:08:14.792 08:28:49 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.792 08:28:49 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60891 00:08:15.051 killing process with pid 60891 00:08:15.051 08:28:49 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.051 08:28:49 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.051 08:28:49 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60891' 00:08:15.051 08:28:49 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60891 00:08:15.051 08:28:49 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60891 00:08:17.588 08:28:52 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:17.588 08:28:52 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:17.588 08:28:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:17.588 08:28:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.588 08:28:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.588 ************************************ 00:08:17.588 START TEST bdev_hello_world 00:08:17.588 ************************************ 00:08:17.588 08:28:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:17.588 [2024-11-22 08:28:52.347140] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:08:17.588 [2024-11-22 08:28:52.347503] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60986 ] 00:08:17.588 [2024-11-22 08:28:52.529486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.588 [2024-11-22 08:28:52.645446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.527 [2024-11-22 08:28:53.290950] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:18.527 [2024-11-22 08:28:53.291022] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:18.527 [2024-11-22 08:28:53.291048] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:18.527 [2024-11-22 08:28:53.294153] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:18.527 [2024-11-22 08:28:53.294879] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:18.527 [2024-11-22 08:28:53.294917] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:18.527 [2024-11-22 08:28:53.295094] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
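For reference, the two steps just logged reduce to a pair of commands: while spdk_tgt is still up, the harness asks it for the names of unclaimed bdevs, and after killing it the same JSON config is handed straight to the hello_bdev example. A minimal sketch against the paths used throughout this log; the single jq filter condenses the harness's two-pass select/.name pipeline and is an illustration, not the harness code itself:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Discovery step: names of bdevs that nothing has claimed yet.
    "$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'
    # Hello-world step: open Nvme0n1 from the JSON config, write a string,
    # read it back (the exact invocation shown above).
    "$SPDK_DIR/build/examples/hello_bdev" \
        --json "$SPDK_DIR/test/bdev/bdev.json" -b Nvme0n1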
00:08:18.527 00:08:18.527 [2024-11-22 08:28:53.295122] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:19.465 00:08:19.465 real 0m2.144s 00:08:19.465 user 0m1.785s 00:08:19.465 sys 0m0.251s 00:08:19.465 ************************************ 00:08:19.465 END TEST bdev_hello_world 00:08:19.465 ************************************ 00:08:19.465 08:28:54 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.465 08:28:54 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:19.465 08:28:54 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:19.465 08:28:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.465 08:28:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.465 08:28:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.465 ************************************ 00:08:19.465 START TEST bdev_bounds 00:08:19.465 ************************************ 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61028 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:19.465 Process bdevio pid: 61028 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61028' 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61028 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61028 ']' 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.465 08:28:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 [2024-11-22 08:28:54.580273] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
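The bdevio process starting here is driven in two stages: the binary is launched with -w so it only loads the config and waits on the default RPC socket, and tests.py then triggers every suite with perform_tests. A rough sketch under the same paths, with the harness's waitforlisten helper approximated by a plain sleep:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_DIR/test/bdev/bdev.json" &
    bdevio_pid=$!
    sleep 2   # assumption: crude stand-in for waitforlisten on /var/tmp/spdk.sock
    "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"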
00:08:19.724 [2024-11-22 08:28:54.580412] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61028 ] 00:08:19.724 [2024-11-22 08:28:54.760896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.984 [2024-11-22 08:28:54.884064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.984 [2024-11-22 08:28:54.884213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.984 [2024-11-22 08:28:54.884244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.552 08:28:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.552 08:28:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:20.552 08:28:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:20.811 I/O targets: 00:08:20.811 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:20.811 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:20.811 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.811 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.811 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.811 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:20.811 00:08:20.811 00:08:20.811 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.811 http://cunit.sourceforge.net/ 00:08:20.811 00:08:20.811 00:08:20.811 Suite: bdevio tests on: Nvme3n1 00:08:20.811 Test: blockdev write read block ...passed 00:08:20.811 Test: blockdev write zeroes read block ...passed 00:08:20.811 Test: blockdev write zeroes read no split ...passed 00:08:20.811 Test: blockdev write zeroes read split ...passed 00:08:20.811 Test: blockdev write zeroes read split partial ...passed 00:08:20.811 Test: blockdev reset ...[2024-11-22 08:28:55.766232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:20.811 [2024-11-22 08:28:55.771324] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:08:20.811 Test: blockdev write read 8 blocks ...uccessful. 
00:08:20.811 passed 00:08:20.811 Test: blockdev write read size > 128k ...passed 00:08:20.811 Test: blockdev write read invalid size ...passed 00:08:20.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.811 Test: blockdev write read max offset ...passed 00:08:20.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.811 Test: blockdev writev readv 8 blocks ...passed 00:08:20.811 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.811 Test: blockdev writev readv block ...passed 00:08:20.811 Test: blockdev writev readv size > 128k ...passed 00:08:20.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.811 Test: blockdev comparev and writev ...[2024-11-22 08:28:55.782268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae40a000 len:0x1000 00:08:20.811 [2024-11-22 08:28:55.782372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.811 passed 00:08:20.811 Test: blockdev nvme passthru rw ...passed 00:08:20.811 Test: blockdev nvme passthru vendor specific ...passed 00:08:20.811 Test: blockdev nvme admin passthru ...[2024-11-22 08:28:55.783379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.811 [2024-11-22 08:28:55.783451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.811 passed 00:08:20.811 Test: blockdev copy ...passed 00:08:20.811 Suite: bdevio tests on: Nvme2n3 00:08:20.811 Test: blockdev write read block ...passed 00:08:20.811 Test: blockdev write zeroes read block ...passed 00:08:20.811 Test: blockdev write zeroes read no split ...passed 00:08:20.811 Test: blockdev write zeroes read split ...passed 00:08:20.811 Test: blockdev write zeroes read split partial ...passed 00:08:20.811 Test: blockdev reset ...[2024-11-22 08:28:55.864029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:20.811 passed 00:08:20.811 Test: blockdev write read 8 blocks ...[2024-11-22 08:28:55.869298] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:20.811 passed 00:08:20.811 Test: blockdev write read size > 128k ...passed 00:08:20.811 Test: blockdev write read invalid size ...passed 00:08:20.811 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.811 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.811 Test: blockdev write read max offset ...passed 00:08:20.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.811 Test: blockdev writev readv 8 blocks ...passed 00:08:20.811 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.811 Test: blockdev writev readv block ...passed 00:08:20.811 Test: blockdev writev readv size > 128k ...passed 00:08:20.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.811 Test: blockdev comparev and writev ...[2024-11-22 08:28:55.878647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x291e06000 len:0x1000 00:08:20.811 [2024-11-22 08:28:55.878743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.811 passed 00:08:20.811 Test: blockdev nvme passthru rw ...passed 00:08:20.811 Test: blockdev nvme passthru vendor specific ...passed 00:08:20.811 Test: blockdev nvme admin passthru ...[2024-11-22 08:28:55.879702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.812 [2024-11-22 08:28:55.879745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.812 passed 00:08:20.812 Test: blockdev copy ...passed 00:08:20.812 Suite: bdevio tests on: Nvme2n2 00:08:20.812 Test: blockdev write read block ...passed 00:08:20.812 Test: blockdev write zeroes read block ...passed 00:08:21.070 Test: blockdev write zeroes read no split ...passed 00:08:21.070 Test: blockdev write zeroes read split ...passed 00:08:21.070 Test: blockdev write zeroes read split partial ...passed 00:08:21.070 Test: blockdev reset ...[2024-11-22 08:28:55.959362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:21.070 passed 00:08:21.070 Test: blockdev write read 8 blocks ...[2024-11-22 08:28:55.964546] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:21.070 passed 00:08:21.070 Test: blockdev write read size > 128k ...passed 00:08:21.070 Test: blockdev write read invalid size ...passed 00:08:21.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:21.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:21.070 Test: blockdev write read max offset ...passed 00:08:21.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:21.070 Test: blockdev writev readv 8 blocks ...passed 00:08:21.070 Test: blockdev writev readv 30 x 1block ...passed 00:08:21.070 Test: blockdev writev readv block ...passed 00:08:21.070 Test: blockdev writev readv size > 128k ...passed 00:08:21.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:21.070 Test: blockdev comparev and writev ...[2024-11-22 08:28:55.973910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c3c000 len:0x1000 00:08:21.070 [2024-11-22 08:28:55.974022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:21.070 passed 00:08:21.070 Test: blockdev nvme passthru rw ...passed 00:08:21.070 Test: blockdev nvme passthru vendor specific ...passed 00:08:21.071 Test: blockdev nvme admin passthru ...[2024-11-22 08:28:55.975012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:21.071 [2024-11-22 08:28:55.975057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:21.071 passed 00:08:21.071 Test: blockdev copy ...passed 00:08:21.071 Suite: bdevio tests on: Nvme2n1 00:08:21.071 Test: blockdev write read block ...passed 00:08:21.071 Test: blockdev write zeroes read block ...passed 00:08:21.071 Test: blockdev write zeroes read no split ...passed 00:08:21.071 Test: blockdev write zeroes read split ...passed 00:08:21.071 Test: blockdev write zeroes read split partial ...passed 00:08:21.071 Test: blockdev reset ...[2024-11-22 08:28:56.057556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:21.071 [2024-11-22 08:28:56.062793] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:08:21.071 Test: blockdev write read 8 blocks ...uccessful. 
00:08:21.071 passed 00:08:21.071 Test: blockdev write read size > 128k ...passed 00:08:21.071 Test: blockdev write read invalid size ...passed 00:08:21.071 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:21.071 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:21.071 Test: blockdev write read max offset ...passed 00:08:21.071 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:21.071 Test: blockdev writev readv 8 blocks ...passed 00:08:21.071 Test: blockdev writev readv 30 x 1block ...passed 00:08:21.071 Test: blockdev writev readv block ...passed 00:08:21.071 Test: blockdev writev readv size > 128k ...passed 00:08:21.071 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:21.071 Test: blockdev comparev and writev ...[2024-11-22 08:28:56.073425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c38000 len:0x1000 00:08:21.071 [2024-11-22 08:28:56.073803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:21.071 passed 00:08:21.071 Test: blockdev nvme passthru rw ...passed 00:08:21.071 Test: blockdev nvme passthru vendor specific ...[2024-11-22 08:28:56.075207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:21.071 [2024-11-22 08:28:56.075381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:21.071 passed 00:08:21.071 Test: blockdev nvme admin passthru ...passed 00:08:21.071 Test: blockdev copy ...passed 00:08:21.071 Suite: bdevio tests on: Nvme1n1 00:08:21.071 Test: blockdev write read block ...passed 00:08:21.071 Test: blockdev write zeroes read block ...passed 00:08:21.071 Test: blockdev write zeroes read no split ...passed 00:08:21.071 Test: blockdev write zeroes read split ...passed 00:08:21.329 Test: blockdev write zeroes read split partial ...passed 00:08:21.329 Test: blockdev reset ...[2024-11-22 08:28:56.155009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:21.329 passed 00:08:21.329 Test: blockdev write read 8 blocks ...[2024-11-22 08:28:56.159988] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:21.329 passed 00:08:21.329 Test: blockdev write read size > 128k ...passed 00:08:21.329 Test: blockdev write read invalid size ...passed 00:08:21.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:21.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:21.329 Test: blockdev write read max offset ...passed 00:08:21.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:21.329 Test: blockdev writev readv 8 blocks ...passed 00:08:21.329 Test: blockdev writev readv 30 x 1block ...passed 00:08:21.329 Test: blockdev writev readv block ...passed 00:08:21.329 Test: blockdev writev readv size > 128k ...passed 00:08:21.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:21.329 Test: blockdev comparev and writev ...[2024-11-22 08:28:56.169889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c34000 len:0x1000 00:08:21.330 [2024-11-22 08:28:56.170012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:21.330 passed 00:08:21.330 Test: blockdev nvme passthru rw ...passed 00:08:21.330 Test: blockdev nvme passthru vendor specific ...passed 00:08:21.330 Test: blockdev nvme admin passthru ...[2024-11-22 08:28:56.171100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:21.330 [2024-11-22 08:28:56.171145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:21.330 passed 00:08:21.330 Test: blockdev copy ...passed 00:08:21.330 Suite: bdevio tests on: Nvme0n1 00:08:21.330 Test: blockdev write read block ...passed 00:08:21.330 Test: blockdev write zeroes read block ...passed 00:08:21.330 Test: blockdev write zeroes read no split ...passed 00:08:21.330 Test: blockdev write zeroes read split ...passed 00:08:21.330 Test: blockdev write zeroes read split partial ...passed 00:08:21.330 Test: blockdev reset ...[2024-11-22 08:28:56.255595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:21.330 [2024-11-22 08:28:56.260375] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:21.330 passed 00:08:21.330 Test: blockdev write read 8 blocks ...passed 00:08:21.330 Test: blockdev write read size > 128k ...passed 00:08:21.330 Test: blockdev write read invalid size ...passed 00:08:21.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:21.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:21.330 Test: blockdev write read max offset ...passed 00:08:21.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:21.330 Test: blockdev writev readv 8 blocks ...passed 00:08:21.330 Test: blockdev writev readv 30 x 1block ...passed 00:08:21.330 Test: blockdev writev readv block ...passed 00:08:21.330 Test: blockdev writev readv size > 128k ...passed 00:08:21.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:21.330 Test: blockdev comparev and writev ...[2024-11-22 08:28:56.270484] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:21.330 separate metadata which is not supported yet. 
00:08:21.330 passed 00:08:21.330 Test: blockdev nvme passthru rw ...passed 00:08:21.330 Test: blockdev nvme passthru vendor specific ...[2024-11-22 08:28:56.271350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:21.330 [2024-11-22 08:28:56.271641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:21.330 passed 00:08:21.330 Test: blockdev nvme admin passthru ...passed 00:08:21.330 Test: blockdev copy ...passed 00:08:21.330 00:08:21.330 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.330 suites 6 6 n/a 0 0 00:08:21.330 tests 138 138 138 0 0 00:08:21.330 asserts 893 893 893 0 n/a 00:08:21.330 00:08:21.330 Elapsed time = 1.585 seconds 00:08:21.330 0 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61028 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61028 ']' 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61028 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61028 00:08:21.330 killing process with pid 61028 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61028' 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61028 00:08:21.330 08:28:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61028 00:08:22.705 ************************************ 00:08:22.705 END TEST bdev_bounds 00:08:22.705 ************************************ 00:08:22.705 08:28:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:22.705 00:08:22.705 real 0m2.874s 00:08:22.705 user 0m7.315s 00:08:22.705 sys 0m0.411s 00:08:22.706 08:28:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.706 08:28:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:22.706 08:28:57 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:22.706 08:28:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:22.706 08:28:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.706 08:28:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.706 ************************************ 00:08:22.706 START TEST bdev_nbd 00:08:22.706 ************************************ 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:22.706 08:28:57 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61093 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61093 /var/tmp/spdk-nbd.sock 00:08:22.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61093 ']' 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.706 08:28:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:22.706 [2024-11-22 08:28:57.528673] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
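With bdev_svc starting here on its own RPC socket, the nbd test boils down to exporting each bdev as a kernel block device, verifying it, and tearing it down. A condensed sketch of one round trip using only the RPC calls that appear below; bdev_svc must already be listening on /var/tmp/spdk-nbd.sock, and /tmp/nbdtest is an illustrative scratch path:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc nbd_start_disk Nvme0n1 /dev/nbd0              # export the bdev as /dev/nbd0
    grep -q -w nbd0 /proc/partitions                  # did the kernel register it?
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one direct 4 KiB read
    rpc nbd_get_disks | jq -r '.[] | .nbd_device'     # list active NBD mappings
    rpc nbd_stop_disk /dev/nbd0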
00:08:22.706 [2024-11-22 08:28:57.529108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.706 [2024-11-22 08:28:57.710557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.964 [2024-11-22 08:28:57.825415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.531 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.789 1+0 records in 
00:08:23.789 1+0 records out 00:08:23.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771162 s, 5.3 MB/s 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.789 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:23.790 08:28:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:23.790 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:23.790 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.790 08:28:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.048 1+0 records in 00:08:24.048 1+0 records out 00:08:24.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686452 s, 6.0 MB/s 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.048 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.307 1+0 records in 00:08:24.307 1+0 records out 00:08:24.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043069 s, 9.5 MB/s 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.307 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.566 1+0 records in 00:08:24.566 1+0 records out 00:08:24.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598852 s, 6.8 MB/s 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.566 08:28:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.566 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.826 1+0 records in 00:08:24.826 1+0 records out 00:08:24.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715545 s, 5.7 MB/s 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.826 08:28:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.085 1+0 records in 00:08:25.085 1+0 records out 00:08:25.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709807 s, 5.8 MB/s 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:25.085 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.344 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd0", 00:08:25.344 "bdev_name": "Nvme0n1" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd1", 00:08:25.344 "bdev_name": "Nvme1n1" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd2", 00:08:25.344 "bdev_name": "Nvme2n1" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd3", 00:08:25.344 "bdev_name": "Nvme2n2" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd4", 00:08:25.344 "bdev_name": "Nvme2n3" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd5", 00:08:25.344 "bdev_name": "Nvme3n1" 00:08:25.344 } 00:08:25.344 ]' 00:08:25.344 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:25.344 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:25.344 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd0", 00:08:25.344 "bdev_name": "Nvme0n1" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd1", 00:08:25.344 "bdev_name": "Nvme1n1" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd2", 00:08:25.344 "bdev_name": "Nvme2n1" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd3", 00:08:25.344 "bdev_name": "Nvme2n2" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd4", 00:08:25.344 "bdev_name": "Nvme2n3" 00:08:25.344 }, 00:08:25.344 { 00:08:25.344 "nbd_device": "/dev/nbd5", 00:08:25.344 "bdev_name": "Nvme3n1" 00:08:25.344 } 00:08:25.344 ]' 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.604 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.862 08:29:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.121 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.380 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.639 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:26.914 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:26.914 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:26.914 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:26.915 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.915 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.915 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:26.915 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.915 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.915 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.915 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.915 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:27.217 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:27.217 08:29:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:27.217 08:29:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.217 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:27.218 /dev/nbd0 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:27.218 
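The nbd_get_count call traced above turns the RPC's JSON reply into a device count: list the disks, extract each .nbd_device with jq, then count the /dev/nbd matches. A sketch under the same assumptions; the lone '# true' entry in the trace is the fallback that swallows grep -c's non-zero exit when the count is 0:

    # Count active NBD exports via the app's RPC socket.
    nbd_get_count() {
        local rpc_server=$1 disks_json disks_name count
        disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 when it counts zero matches, hence the guard.
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }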
08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.218 1+0 records in 00:08:27.218 1+0 records out 00:08:27.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660967 s, 6.2 MB/s 00:08:27.218 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:27.477 /dev/nbd1 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.477 1+0 records in 00:08:27.477 1+0 records out 00:08:27.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589172 s, 7.0 MB/s 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.477 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:27.736 /dev/nbd10 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:27.736 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.737 1+0 records in 00:08:27.737 1+0 records out 00:08:27.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618931 s, 6.6 MB/s 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.737 08:29:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:27.996 /dev/nbd11 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:27.996 08:29:03 
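Each start is followed by the waitfornbd readiness probe seen repeatedly above: wait for the name to appear in /proc/partitions, then prove the device actually serves I/O with a one-block direct read. A sketch reconstructed from those trace entries; the retry bound comes from the '(( i <= 20 ))' checks, while the inter-poll sleep is an assumption:

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed
        done
        for ((i = 1; i <= 20; i++)); do
            # A direct-I/O read of one 4 KiB block must succeed...
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1   # assumed
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        # ...and must have produced a non-empty file.
        [ "$size" != 0 ]
    }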
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.996 1+0 records in 00:08:27.996 1+0 records out 00:08:27.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685231 s, 6.0 MB/s 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.996 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:28.255 /dev/nbd12 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:28.255 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.255 1+0 records in 00:08:28.255 1+0 records out 00:08:28.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000998472 s, 4.1 MB/s 00:08:28.256 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.256 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:28.256 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.256 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:28.256 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:28.256 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.256 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:28.256 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:28.516 /dev/nbd13 
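With all six Nvme bdevs being exported, the driver loop behind nbd_start_disks is simply the two arrays from the trace walked in lock-step, one RPC plus one readiness probe per pair (see the waitfornbd sketch above). A sketch:

    bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
    nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')

    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # Export bdev i on NBD device i, then block until it is usable.
        "$rpc_py" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done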
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:28.516 1+0 records in
00:08:28.516 1+0 records out
00:08:28.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691327 s, 5.9 MB/s
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:28.516 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd0",
00:08:28.775 "bdev_name": "Nvme0n1"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd1",
00:08:28.775 "bdev_name": "Nvme1n1"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd10",
00:08:28.775 "bdev_name": "Nvme2n1"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd11",
00:08:28.775 "bdev_name": "Nvme2n2"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd12",
00:08:28.775 "bdev_name": "Nvme2n3"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd13",
00:08:28.775 "bdev_name": "Nvme3n1"
00:08:28.775 }
00:08:28.775 ]'
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd0",
00:08:28.775 "bdev_name": "Nvme0n1"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd1",
00:08:28.775 "bdev_name": "Nvme1n1"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd10",
00:08:28.775 "bdev_name": "Nvme2n1"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd11",
00:08:28.775 "bdev_name": "Nvme2n2"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd12",
00:08:28.775 "bdev_name": "Nvme2n3"
00:08:28.775 },
00:08:28.775 {
00:08:28.775 "nbd_device": "/dev/nbd13",
00:08:28.775 "bdev_name": "Nvme3n1"
00:08:28.775 }
00:08:28.775 ]'
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:28.775 /dev/nbd1
00:08:28.775 /dev/nbd10
00:08:28.775 /dev/nbd11
00:08:28.775 /dev/nbd12
00:08:28.775 /dev/nbd13'
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:28.775 /dev/nbd1
00:08:28.775 /dev/nbd10
00:08:28.775 /dev/nbd11
00:08:28.775 /dev/nbd12
00:08:28.775 /dev/nbd13'
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:08:28.775 256+0 records in
00:08:28.775 256+0 records out
00:08:28.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00950166 s, 110 MB/s
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:28.775 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:29.035 256+0 records in
00:08:29.035 256+0 records out
00:08:29.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126992 s, 8.3 MB/s
00:08:29.035 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:29.035 08:29:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:29.035 256+0 records in
00:08:29.035 256+0 records out
00:08:29.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132731 s, 7.9 MB/s
00:08:29.035 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:29.035 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:08:29.294 256+0 records in
00:08:29.294 256+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131911 s, 7.9 MB/s 00:08:29.294 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.294 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:29.554 256+0 records in 00:08:29.554 256+0 records out 00:08:29.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128794 s, 8.1 MB/s 00:08:29.554 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.554 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:29.554 256+0 records in 00:08:29.554 256+0 records out 00:08:29.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129801 s, 8.1 MB/s 00:08:29.554 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.554 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:29.813 256+0 records in 00:08:29.813 256+0 records out 00:08:29.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132711 s, 7.9 MB/s 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.813 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.072 08:29:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.073 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.073 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.073 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.073 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.073 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.073 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.331 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.331 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.332 08:29:05 
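The write/verify pass traced in the two loops above boils down to: one 1 MiB random file, written to every export with direct I/O, then compared byte-for-byte against each device. A condensed sketch of nbd_dd_data_verify's two modes:

    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB of random data

    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write pass
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # verify pass: any diff fails the test
    done
    rm "$tmp_file"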
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.332 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.591 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.850 08:29:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock
00:08:31.109 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:08:31.368 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:08:31.627 malloc_lvol_verify
00:08:31.627 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:08:31.886 d041e04c-c700-4445-8b44-ca3956005d7d
00:08:31.886 08:29:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:08:32.145 8dcc394f-bddc-419e-82c5-e87a5d82fe91
00:08:32.145 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:08:32.404 /dev/nbd0
00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:08:32.404 mke2fs 1.47.0 (5-Feb-2023)
00:08:32.404 Discarding device blocks: 0/4096 done
00:08:32.404 Creating filesystem with 4096 1k blocks and 1024 inodes
00:08:32.404
00:08:32.404 Allocating group tables: 0/1 done
00:08:32.404 Writing inode tables: 0/1 done
00:08:32.404 Creating journal (1024 blocks): done
00:08:32.404 Writing superblocks and filesystem accounting information: 0/1 done
00:08:32.404
00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:08:32.404 08:29:07
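The lvol round-trip above is four RPCs plus a filesystem check. A sketch of the same sequence; sizes are in MiB per the rpc.py argument conventions shown in the trace:

    # malloc bdev (16 MiB, 512 B blocks) -> lvolstore -> 4 MiB lvol -> NBD export
    "$rpc_py" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc_py" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc_py" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc_py" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

    # The node can appear before its capacity is set; require a non-zero size
    # (the trace shows 8192 512-byte sectors) before formatting.
    (( $(cat /sys/block/nbd0/size) != 0 ))
    mkfs.ext4 /dev/nbd0   # a clean format is the pass condition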
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.404 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61093 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61093 ']' 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61093 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61093 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.664 killing process with pid 61093 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61093' 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61093 00:08:32.664 08:29:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61093 00:08:34.045 08:29:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:34.045 00:08:34.046 real 0m11.469s 00:08:34.046 user 0m14.896s 00:08:34.046 sys 0m4.651s 00:08:34.046 08:29:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.046 08:29:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:34.046 ************************************ 00:08:34.046 END TEST bdev_nbd 00:08:34.046 ************************************ 00:08:34.046 08:29:08 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:34.046 08:29:08 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:34.046 skipping fio tests on NVMe due to multi-ns failures. 00:08:34.046 08:29:08 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
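killprocess, traced just above, is the teardown helper: confirm the pid is alive, check what it is before signalling it, then kill and reap it. A simplified sketch reconstructed from the trace entries (the real helper has extra branches, e.g. for processes started via sudo):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"   # fails fast if the process is already gone
        if [ "$(uname)" = Linux ]; then
            # Refuse to signal a sudo wrapper directly, as the trace checks.
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so the next test starts from a clean slate
    }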
00:08:34.046 08:29:08 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:34.046 08:29:08 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:34.046 08:29:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:08:34.046 08:29:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:34.046 08:29:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:34.046 ************************************
00:08:34.046 START TEST bdev_verify
00:08:34.046 ************************************
00:08:34.046 08:29:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:34.046 [2024-11-22 08:29:09.052556] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:08:34.046 [2024-11-22 08:29:09.052690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61477 ]
00:08:34.314 [2024-11-22 08:29:09.234995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:34.314 [2024-11-22 08:29:09.381670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:34.314 [2024-11-22 08:29:09.381757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.251 Running I/O for 5 seconds...
00:08:37.572 17024.00 IOPS, 66.50 MiB/s
[2024-11-22T08:29:13.597Z] 17696.00 IOPS, 69.12 MiB/s
[2024-11-22T08:29:14.536Z] 17834.67 IOPS, 69.67 MiB/s
[2024-11-22T08:29:15.472Z] 17936.00 IOPS, 70.06 MiB/s
[2024-11-22T08:29:15.472Z] 18368.00 IOPS, 71.75 MiB/s
00:08:40.385 Latency(us)
00:08:40.385 [2024-11-22T08:29:15.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.385 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x0 length 0xbd0bd
00:08:40.385 Nvme0n1 : 5.07 1680.10 6.56 0.00 0.00 75859.02 10843.71 74537.33
00:08:40.385 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:40.385 Nvme0n1 : 5.06 1352.39 5.28 0.00 0.00 94146.54 12212.33 95171.96
00:08:40.385 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x0 length 0xa0000
00:08:40.385 Nvme1n1 : 5.07 1679.65 6.56 0.00 0.00 75779.00 9106.61 70326.18
00:08:40.385 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0xa0000 length 0xa0000
00:08:40.385 Nvme1n1 : 5.08 1361.13 5.32 0.00 0.00 93629.28 10422.59 90118.58
00:08:40.385 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x0 length 0x80000
00:08:40.385 Nvme2n1 : 5.08 1687.72 6.59 0.00 0.00 75424.99 9159.25 60640.54
00:08:40.385 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x80000 length 0x80000
00:08:40.385 Nvme2n1 : 5.08 1360.73 5.32 0.00 0.00 93492.93 10580.51 88013.01
00:08:40.385 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x0 length 0x80000
00:08:40.385 Nvme2n2 : 5.08 1687.32 6.59 0.00 0.00 75286.09 8948.69 55166.05
00:08:40.385 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x80000 length 0x80000
00:08:40.385 Nvme2n2 : 5.08 1360.04 5.31 0.00 0.00 93435.34 12528.17 93066.38
00:08:40.385 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x0 length 0x80000
00:08:40.385 Nvme2n3 : 5.09 1686.35 6.59 0.00 0.00 75198.77 10738.43 55166.05
00:08:40.385 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x80000 length 0x80000
00:08:40.385 Nvme2n3 : 5.08 1359.62 5.31 0.00 0.00 93315.73 13265.12 95593.07
00:08:40.385 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x0 length 0x20000
00:08:40.385 Nvme3n1 : 5.09 1685.95 6.59 0.00 0.00 75092.35 10738.43 56850.51
00:08:40.385 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.385 Verification LBA range: start 0x20000 length 0x20000
00:08:40.385 Nvme3n1 : 5.08 1359.31 5.31 0.00 0.00 93167.02 13159.84 96014.19
00:08:40.385 [2024-11-22T08:29:15.472Z] ===================================================================================================================
00:08:40.385 [2024-11-22T08:29:15.472Z] Total : 18260.30 71.33 0.00 0.00 83517.03 8948.69 96014.19
00:08:41.762
00:08:41.762 real 0m7.864s
00:08:41.762 user 0m14.419s
00:08:41.762 sys 0m0.390s
00:08:41.762 08:29:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.762 08:29:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:41.762 ************************************
00:08:41.762 END TEST bdev_verify
00:08:41.762 ************************************
00:08:42.021 08:29:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:42.021 08:29:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:08:42.021 08:29:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.021 08:29:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:42.021 ************************************
00:08:42.021 START TEST bdev_verify_big_io
00:08:42.021 ************************************
00:08:42.021 08:29:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:42.021 [2024-11-22 08:29:16.987185] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
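For reference, the bdev_verify case is a single bdevperf invocation; the flag annotations below are inferred from the run and its output table (two jobs per bdev, Core Mask 0x1 and 0x2), not quoted from bdevperf's usage text:

    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev config
        -q 128       # queue depth
        -o 4096      # I/O size in bytes
        -w verify    # write, read back, compare
        -t 5         # run for 5 seconds
        -C           # with -m 0x3 this yields one job per core per bdev
        -m 0x3       # core mask: cores 0 and 1
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"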
00:08:42.021 [2024-11-22 08:29:16.987338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61586 ]
00:08:42.279 [2024-11-22 08:29:17.176249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:42.279 [2024-11-22 08:29:17.324742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:42.279 [2024-11-22 08:29:17.324776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:43.214 Running I/O for 5 seconds...
00:08:47.029 1537.00 IOPS, 96.06 MiB/s
[2024-11-22T08:29:24.024Z] 1931.00 IOPS, 120.69 MiB/s
[2024-11-22T08:29:24.593Z] 2874.00 IOPS, 179.62 MiB/s
00:08:49.506 Latency(us)
00:08:49.506 [2024-11-22T08:29:24.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:49.506 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x0 length 0xbd0b
00:08:49.506 Nvme0n1 : 5.52 231.93 14.50 0.00 0.00 544139.95 27372.47 559240.53
00:08:49.506 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:49.506 Nvme0n1 : 5.58 113.94 7.12 0.00 0.00 1086524.18 17581.55 1266713.50
00:08:49.506 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x0 length 0xa000
00:08:49.506 Nvme1n1 : 5.52 228.07 14.25 0.00 0.00 540667.12 25688.01 508706.75
00:08:49.506 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0xa000 length 0xa000
00:08:49.506 Nvme1n1 : 5.58 114.61 7.16 0.00 0.00 1021767.10 39163.68 976986.47
00:08:49.506 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x0 length 0x8000
00:08:49.506 Nvme2n1 : 5.52 228.86 14.30 0.00 0.00 530507.44 27372.47 518813.51
00:08:49.506 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x8000 length 0x8000
00:08:49.506 Nvme2n1 : 5.72 127.52 7.97 0.00 0.00 888104.95 34741.98 976986.47
00:08:49.506 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x0 length 0x8000
00:08:49.506 Nvme2n2 : 5.53 231.67 14.48 0.00 0.00 517383.11 50323.23 522182.43
00:08:49.506 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x8000 length 0x8000
00:08:49.506 Nvme2n2 : 5.79 140.41 8.78 0.00 0.00 780250.53 23898.27 1751837.82
00:08:49.506 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x0 length 0x8000
00:08:49.506 Nvme2n3 : 5.53 231.58 14.47 0.00 0.00 508579.68 48007.09 522182.43
00:08:49.506 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x8000 length 0x8000
00:08:49.506 Nvme2n3 : 5.98 195.71 12.23 0.00 0.00 538122.16 9633.00 1462110.79
00:08:49.506 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x0 length 0x2000
00:08:49.506 Nvme3n1 : 5.54 242.77 15.17 0.00 0.00 479180.73 4553.30 572716.21
00:08:49.506 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.506 Verification LBA range: start 0x2000 length 0x2000
00:08:49.506 Nvme3n1 : 6.14 283.64 17.73 0.00 0.00 364674.81 615.22 2075254.03
00:08:49.506 [2024-11-22T08:29:24.593Z] ===================================================================================================================
00:08:49.506 [2024-11-22T08:29:24.593Z] Total : 2370.71 148.17 0.00 0.00 587693.21 615.22 2075254.03
00:08:51.409
00:08:51.409 real 0m9.445s
00:08:51.409 user 0m17.515s
00:08:51.409 sys 0m0.454s
00:08:51.409 08:29:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:51.409 08:29:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:51.409 ************************************
00:08:51.409 END TEST bdev_verify_big_io
00:08:51.409 ************************************
00:08:51.409 08:29:26 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:51.409 08:29:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:51.409 08:29:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:51.409 08:29:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:51.409 ************************************
00:08:51.409 START TEST bdev_write_zeroes
00:08:51.409 ************************************
00:08:51.409 08:29:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:51.668 [2024-11-22 08:29:26.494409] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:08:51.668 [2024-11-22 08:29:26.494555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61706 ]
00:08:51.928 [2024-11-22 08:29:26.667183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:52.494 [2024-11-22 08:29:26.783675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:52.494 Running I/O for 1 seconds...
00:08:53.428 82888.00 IOPS, 323.78 MiB/s
00:08:53.428
00:08:53.428 Latency(us)
00:08:53.428 [2024-11-22T08:29:28.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:53.428 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:53.428 Nvme0n1 : 1.02 13715.40 53.58 0.00 0.00 9315.66 8159.10 22950.76
00:08:53.428 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:53.428 Nvme1n1 : 1.02 13757.66 53.74 0.00 0.00 9276.07 8474.94 17581.55
00:08:53.428 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:53.428 Nvme2n1 : 1.02 13745.18 53.69 0.00 0.00 9270.83 8264.38 17265.71
00:08:53.428 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:53.428 Nvme2n2 : 1.02 13731.62 53.64 0.00 0.00 9267.41 8211.74 16739.32
00:08:53.428 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:53.428 Nvme2n3 : 1.02 13718.43 53.59 0.00 0.00 9252.61 8211.74 16634.04
00:08:53.428 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:53.428 Nvme3n1 : 1.02 13705.84 53.54 0.00 0.00 9242.14 7106.31 18107.94
00:08:53.428 [2024-11-22T08:29:28.515Z] ===================================================================================================================
00:08:53.428 [2024-11-22T08:29:28.515Z] Total : 82374.13 321.77 0.00 0.00 9270.76 7106.31 22950.76
00:08:54.824
00:08:54.824 real 0m3.233s
00:08:54.824 user 0m2.845s
00:08:54.824 sys 0m0.276s
00:08:54.824 08:29:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:54.824 08:29:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:54.824 ************************************
00:08:54.824 END TEST bdev_write_zeroes
00:08:54.824 ************************************
00:08:54.824 08:29:29 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:54.824 08:29:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:54.824 08:29:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:54.824 08:29:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:54.824 ************************************
00:08:54.824 START TEST bdev_json_nonenclosed
00:08:54.824 ************************************
00:08:54.824 08:29:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:54.824 [2024-11-22 08:29:29.794331] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
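bdev_json_nonenclosed is a negative test: hand bdevperf a config whose top level is not enclosed in {} and require a diagnosed failure rather than a crash. A sketch of the idea; the config body here is illustrative, not the shipped test/bdev/nonenclosed.json:

    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json
    # bdevperf must reject this; a zero exit status would be the bug.
    if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
           --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
        echo 'expected bdevperf to reject the config' >&2
        exit 1
    fi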
00:08:54.824 [2024-11-22 08:29:29.794488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61759 ] 00:08:55.083 [2024-11-22 08:29:29.976117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.083 [2024-11-22 08:29:30.090531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.083 [2024-11-22 08:29:30.090626] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:55.083 [2024-11-22 08:29:30.090648] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:55.083 [2024-11-22 08:29:30.090660] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.343 00:08:55.343 real 0m0.630s 00:08:55.343 user 0m0.392s 00:08:55.343 sys 0m0.135s 00:08:55.343 08:29:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.343 ************************************ 00:08:55.343 END TEST bdev_json_nonenclosed 00:08:55.343 ************************************ 00:08:55.343 08:29:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 08:29:30 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.343 08:29:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:55.343 08:29:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.343 08:29:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.343 ************************************ 00:08:55.343 START TEST bdev_json_nonarray 00:08:55.343 ************************************ 00:08:55.343 08:29:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.603 [2024-11-22 08:29:30.492962] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:08:55.603 [2024-11-22 08:29:30.493107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:08:55.603 [2024-11-22 08:29:30.672402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.862 [2024-11-22 08:29:30.777937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.862 [2024-11-22 08:29:30.778067] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:55.862 [2024-11-22 08:29:30.778090] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:55.862 [2024-11-22 08:29:30.778103] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:56.122 00:08:56.122 real 0m0.625s 00:08:56.122 user 0m0.381s 00:08:56.122 sys 0m0.139s 00:08:56.122 08:29:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.122 08:29:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:56.122 ************************************ 00:08:56.122 END TEST bdev_json_nonarray 00:08:56.122 ************************************ 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:56.122 08:29:31 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:56.122 00:08:56.122 real 0m43.459s 00:08:56.122 user 1m4.296s 00:08:56.122 sys 0m7.898s 00:08:56.122 08:29:31 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.122 08:29:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.122 ************************************ 00:08:56.122 END TEST blockdev_nvme 00:08:56.122 ************************************ 00:08:56.122 08:29:31 -- spdk/autotest.sh@209 -- # uname -s 00:08:56.122 08:29:31 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:56.122 08:29:31 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:56.122 08:29:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.122 08:29:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.122 08:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:56.122 ************************************ 00:08:56.122 START TEST blockdev_nvme_gpt 00:08:56.122 ************************************ 00:08:56.122 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:56.382 * Looking for test storage... 
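Note: blockdev.sh dispatches on its first argument; the traces above and below show the same script branching on nvme, gpt, crypto_sw, rbd, daos and xnvme before settling on test_type=gpt. Invoking it directly looks roughly like this (a sketch; the suite normally runs under the autotest harness and assumes root privileges and prepared devices):

# blockdev.sh takes the test type as $1 and runs the generic bdev suite
# against bdevs created for that backend.
cd /home/vagrant/spdk_repo/spdk
sudo test/bdev/blockdev.sh nvme   # the run that just finished above
sudo test/bdev/blockdev.sh gpt    # the run that starts here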
00:08:56.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.382 08:29:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.382 --rc genhtml_branch_coverage=1 00:08:56.382 --rc genhtml_function_coverage=1 00:08:56.382 --rc genhtml_legend=1 00:08:56.382 --rc geninfo_all_blocks=1 00:08:56.382 --rc geninfo_unexecuted_blocks=1 00:08:56.382 00:08:56.382 ' 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.382 --rc 
genhtml_branch_coverage=1 00:08:56.382 --rc genhtml_function_coverage=1 00:08:56.382 --rc genhtml_legend=1 00:08:56.382 --rc geninfo_all_blocks=1 00:08:56.382 --rc geninfo_unexecuted_blocks=1 00:08:56.382 00:08:56.382 ' 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.382 --rc genhtml_branch_coverage=1 00:08:56.382 --rc genhtml_function_coverage=1 00:08:56.382 --rc genhtml_legend=1 00:08:56.382 --rc geninfo_all_blocks=1 00:08:56.382 --rc geninfo_unexecuted_blocks=1 00:08:56.382 00:08:56.382 ' 00:08:56.382 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.382 --rc genhtml_branch_coverage=1 00:08:56.382 --rc genhtml_function_coverage=1 00:08:56.382 --rc genhtml_legend=1 00:08:56.382 --rc geninfo_all_blocks=1 00:08:56.382 --rc geninfo_unexecuted_blocks=1 00:08:56.382 00:08:56.382 ' 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:56.382 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61869 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61869 
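Note: the lcov probe traced above settles on the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings because the installed lcov is a 1.x release; cmp_versions splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right. A simplified equivalent of that "less than" check (a sketch, not the script itself; it assumes purely numeric fields):

# Returns success when $1 sorts strictly before $2, field by field, using the
# same separators the traced cmp_versions uses.
version_lt() {
    local IFS='.-:'
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov < 2: use the legacy --rc option names'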
00:08:56.383 08:29:31 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:56.383 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61869 ']' 00:08:56.383 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.383 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.383 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.383 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.383 08:29:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:56.642 [2024-11-22 08:29:31.534207] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:08:56.642 [2024-11-22 08:29:31.534759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61869 ] 00:08:56.642 [2024-11-22 08:29:31.718976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.902 [2024-11-22 08:29:31.826159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.840 08:29:32 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.840 08:29:32 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:57.840 08:29:32 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:57.840 08:29:32 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:57.840 08:29:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:58.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.532 Waiting for block devices as requested 00:08:58.532 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.792 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.792 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.052 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.328 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:04.328 08:29:39 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:04.328 08:29:39 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:04.328 BYT; 00:09:04.328 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:04.328 BYT; 00:09:04.328 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:04.328 08:29:39 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:04.328 08:29:39 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:05.264 The operation has completed successfully. 00:09:05.264 08:29:40 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:06.204 The operation has completed successfully. 00:09:06.204 08:29:41 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:06.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:07.713 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.713 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.713 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.713 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.974 08:29:42 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:07.974 08:29:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.974 08:29:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.974 [] 00:09:07.974 08:29:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.974 08:29:42 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:07.974 08:29:42 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:07.974 08:29:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:07.974 08:29:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:07.974 08:29:42 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:07.974 08:29:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.974 08:29:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.233 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.233 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:09:08.233 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:08.233 08:29:43 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.233 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.233 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.233 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:08.233 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:08.233 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:08.234 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.234 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.494 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.494 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:08.494 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:08.495 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a0ccff43-424b-4113-a5ca-c07f654096b6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a0ccff43-424b-4113-a5ca-c07f654096b6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "416dab98-a486-497c-adc3-f9eeaf444185"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "416dab98-a486-497c-adc3-f9eeaf444185",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8bfe5290-0384-443d-aaba-decf2ee486a9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8bfe5290-0384-443d-aaba-decf2ee486a9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2785502d-35ff-45c8-afa8-19d28fb3c9fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2785502d-35ff-45c8-afa8-19d28fb3c9fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9713d5f3-f14d-44be-8500-e9df5104e3bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9713d5f3-f14d-44be-8500-e9df5104e3bb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:08.495 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:08.495 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:08.495 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:08.495 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61869 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61869 ']' 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61869 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61869 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.495 killing process with pid 61869 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61869' 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61869 00:09:08.495 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61869 00:09:11.036 08:29:45 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:11.036 08:29:45 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:11.036 08:29:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:11.036 08:29:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.036 08:29:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:11.036 ************************************ 00:09:11.036 START TEST bdev_hello_world 00:09:11.036 ************************************ 00:09:11.036 08:29:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:11.036 
[2024-11-22 08:29:45.976712] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:09:11.036 [2024-11-22 08:29:45.976845] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62517 ] 00:09:11.295 [2024-11-22 08:29:46.153288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.295 [2024-11-22 08:29:46.261900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.864 [2024-11-22 08:29:46.904761] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:11.864 [2024-11-22 08:29:46.904814] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:11.864 [2024-11-22 08:29:46.904856] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:11.864 [2024-11-22 08:29:46.907809] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:11.864 [2024-11-22 08:29:46.908589] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:11.864 [2024-11-22 08:29:46.908626] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:11.864 [2024-11-22 08:29:46.908875] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:11.864 00:09:11.864 [2024-11-22 08:29:46.908903] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:13.241 00:09:13.241 real 0m2.109s 00:09:13.241 user 0m1.767s 00:09:13.241 sys 0m0.234s 00:09:13.241 08:29:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.241 08:29:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:13.241 ************************************ 00:09:13.241 END TEST bdev_hello_world 00:09:13.241 ************************************ 00:09:13.241 08:29:48 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:13.241 08:29:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:13.241 08:29:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.241 08:29:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.241 ************************************ 00:09:13.241 START TEST bdev_bounds 00:09:13.241 ************************************ 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62559 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62559' 00:09:13.241 Process bdevio pid: 62559 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62559 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62559 ']' 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.241 08:29:48 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.241 08:29:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:13.241 [2024-11-22 08:29:48.160794] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:09:13.241 [2024-11-22 08:29:48.160925] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62559 ] 00:09:13.500 [2024-11-22 08:29:48.340920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:13.500 [2024-11-22 08:29:48.452577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.500 [2024-11-22 08:29:48.453232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.500 [2024-11-22 08:29:48.453259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.069 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.069 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:14.069 08:29:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:14.329 I/O targets: 00:09:14.329 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:14.329 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:14.329 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:14.329 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:14.329 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:14.329 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:14.329 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:14.329 00:09:14.329 00:09:14.329 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.329 http://cunit.sourceforge.net/ 00:09:14.329 00:09:14.329 00:09:14.329 Suite: bdevio tests on: Nvme3n1 00:09:14.329 Test: blockdev write read block ...passed 00:09:14.329 Test: blockdev write zeroes read block ...passed 00:09:14.329 Test: blockdev write zeroes read no split ...passed 00:09:14.329 Test: blockdev write zeroes read split ...passed 00:09:14.329 Test: blockdev write zeroes read split partial ...passed 00:09:14.329 Test: blockdev reset ...[2024-11-22 08:29:49.294903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:14.329 [2024-11-22 08:29:49.301423] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
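Note: the bounds suite runs in two halves, both visible in the traces above: bdevio is started with the -w and -s 0 flags against the same bdev.json and waits for an RPC, and tests.py then fires the perform_tests RPC that produces the CUnit output interleaved here. Driven by hand that is roughly (a sketch; root privileges assumed):

# Start bdevio waiting for the go-ahead RPC, then kick the test run.
sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests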
00:09:14.329 passed 00:09:14.329 Test: blockdev write read 8 blocks ...passed 00:09:14.329 Test: blockdev write read size > 128k ...passed 00:09:14.329 Test: blockdev write read invalid size ...passed 00:09:14.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.329 Test: blockdev write read max offset ...passed 00:09:14.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.329 Test: blockdev writev readv 8 blocks ...passed 00:09:14.330 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.330 Test: blockdev writev readv block ...passed 00:09:14.330 Test: blockdev writev readv size > 128k ...passed 00:09:14.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.330 Test: blockdev comparev and writev ...[2024-11-22 08:29:49.311271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac404000 len:0x1000 00:09:14.330 [2024-11-22 08:29:49.311366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:14.330 passed 00:09:14.330 Test: blockdev nvme passthru rw ...passed 00:09:14.330 Test: blockdev nvme passthru vendor specific ...[2024-11-22 08:29:49.312515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:14.330 [2024-11-22 08:29:49.312570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:14.330 passed 00:09:14.330 Test: blockdev nvme admin passthru ...passed 00:09:14.330 Test: blockdev copy ...passed 00:09:14.330 Suite: bdevio tests on: Nvme2n3 00:09:14.330 Test: blockdev write read block ...passed 00:09:14.330 Test: blockdev write zeroes read block ...passed 00:09:14.330 Test: blockdev write zeroes read no split ...passed 00:09:14.330 Test: blockdev write zeroes read split ...passed 00:09:14.330 Test: blockdev write zeroes read split partial ...passed 00:09:14.330 Test: blockdev reset ...[2024-11-22 08:29:49.386147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:14.330 [2024-11-22 08:29:49.391293] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
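Note: the COMPARE FAILURE (02/85) completions logged in these comparev-and-writev tests are the expected result of the deliberately mismatched compare half of the test, not errors; 02/85 is the NVMe media-error status for Compare Failure. The same status can be provoked on a kernel-attached namespace with nvme-cli, roughly (a sketch; device and file names are hypothetical, NVMe block counts are zero-based, and the flag spellings should be checked against nvme compare --help):

# Compare LBA 0 against a buffer that almost certainly differs.
dd if=/dev/urandom of=/tmp/other.bin bs=4096 count=1
sudo nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 \
    --data-size=4096 --data=/tmp/other.bin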
00:09:14.330 passed 00:09:14.330 Test: blockdev write read 8 blocks ...passed 00:09:14.330 Test: blockdev write read size > 128k ...passed 00:09:14.330 Test: blockdev write read invalid size ...passed 00:09:14.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.330 Test: blockdev write read max offset ...passed 00:09:14.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.330 Test: blockdev writev readv 8 blocks ...passed 00:09:14.330 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.330 Test: blockdev writev readv block ...passed 00:09:14.330 Test: blockdev writev readv size > 128k ...passed 00:09:14.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.330 Test: blockdev comparev and writev ...[2024-11-22 08:29:49.400508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac402000 len:0x1000 00:09:14.330 [2024-11-22 08:29:49.400585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:14.330 passed 00:09:14.330 Test: blockdev nvme passthru rw ...passed 00:09:14.330 Test: blockdev nvme passthru vendor specific ...[2024-11-22 08:29:49.401607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:14.330 [2024-11-22 08:29:49.401650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:14.330 passed 00:09:14.330 Test: blockdev nvme admin passthru ...passed 00:09:14.330 Test: blockdev copy ...passed 00:09:14.330 Suite: bdevio tests on: Nvme2n2 00:09:14.330 Test: blockdev write read block ...passed 00:09:14.330 Test: blockdev write zeroes read block ...passed 00:09:14.590 Test: blockdev write zeroes read no split ...passed 00:09:14.590 Test: blockdev write zeroes read split ...passed 00:09:14.590 Test: blockdev write zeroes read split partial ...passed 00:09:14.590 Test: blockdev reset ...[2024-11-22 08:29:49.477459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:14.590 [2024-11-22 08:29:49.485569] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:14.590 passed 00:09:14.590 Test: blockdev write read 8 blocks ...passed 00:09:14.590 Test: blockdev write read size > 128k ...passed 00:09:14.590 Test: blockdev write read invalid size ...passed 00:09:14.590 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.590 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.590 Test: blockdev write read max offset ...passed 00:09:14.590 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.590 Test: blockdev writev readv 8 blocks ...passed 00:09:14.590 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.590 Test: blockdev writev readv block ...passed 00:09:14.590 Test: blockdev writev readv size > 128k ...passed 00:09:14.590 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.590 Test: blockdev comparev and writev ...[2024-11-22 08:29:49.494892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf238000 len:0x1000 00:09:14.590 [2024-11-22 08:29:49.494980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:14.590 passed 00:09:14.590 Test: blockdev nvme passthru rw ...passed 00:09:14.590 Test: blockdev nvme passthru vendor specific ...[2024-11-22 08:29:49.496113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:14.590 [2024-11-22 08:29:49.496179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:09:14.590 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:09:14.590 passed 00:09:14.590 Test: blockdev copy ...passed 00:09:14.590 Suite: bdevio tests on: Nvme2n1 00:09:14.590 Test: blockdev write read block ...passed 00:09:14.590 Test: blockdev write zeroes read block ...passed 00:09:14.590 Test: blockdev write zeroes read no split ...passed 00:09:14.590 Test: blockdev write zeroes read split ...passed 00:09:14.590 Test: blockdev write zeroes read split partial ...passed 00:09:14.590 Test: blockdev reset ...[2024-11-22 08:29:49.569243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:14.590 [2024-11-22 08:29:49.574268] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:14.590 passed 00:09:14.590 Test: blockdev write read 8 blocks ...passed 00:09:14.590 Test: blockdev write read size > 128k ...passed 00:09:14.590 Test: blockdev write read invalid size ...passed 00:09:14.590 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.590 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.590 Test: blockdev write read max offset ...passed 00:09:14.590 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.590 Test: blockdev writev readv 8 blocks ...passed 00:09:14.590 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.590 Test: blockdev writev readv block ...passed 00:09:14.590 Test: blockdev writev readv size > 128k ...passed 00:09:14.590 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.591 Test: blockdev comparev and writev ...[2024-11-22 08:29:49.583407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf234000 len:0x1000 00:09:14.591 [2024-11-22 08:29:49.583489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:14.591 passed 00:09:14.591 Test: blockdev nvme passthru rw ...passed 00:09:14.591 Test: blockdev nvme passthru vendor specific ...[2024-11-22 08:29:49.584443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:14.591 [2024-11-22 08:29:49.584482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:14.591 passed 00:09:14.591 Test: blockdev nvme admin passthru ...passed 00:09:14.591 Test: blockdev copy ...passed 00:09:14.591 Suite: bdevio tests on: Nvme1n1p2 00:09:14.591 Test: blockdev write read block ...passed 00:09:14.591 Test: blockdev write zeroes read block ...passed 00:09:14.591 Test: blockdev write zeroes read no split ...passed 00:09:14.591 Test: blockdev write zeroes read split ...passed 00:09:14.591 Test: blockdev write zeroes read split partial ...passed 00:09:14.591 Test: blockdev reset ...[2024-11-22 08:29:49.662204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:14.591 [2024-11-22 08:29:49.667046] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:14.591 passed 00:09:14.591 Test: blockdev write read 8 blocks ...passed 00:09:14.591 Test: blockdev write read size > 128k ...passed 00:09:14.591 Test: blockdev write read invalid size ...passed 00:09:14.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.591 Test: blockdev write read max offset ...passed 00:09:14.851 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.851 Test: blockdev writev readv 8 blocks ...passed 00:09:14.851 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.851 Test: blockdev writev readv block ...passed 00:09:14.851 Test: blockdev writev readv size > 128k ...passed 00:09:14.851 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.851 Test: blockdev comparev and writev ...[2024-11-22 08:29:49.676265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2bf230000 len:0x1000 00:09:14.851 [2024-11-22 08:29:49.676341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:14.851 passed 00:09:14.851 Test: blockdev nvme passthru rw ...passed 00:09:14.851 Test: blockdev nvme passthru vendor specific ...passed 00:09:14.851 Test: blockdev nvme admin passthru ...passed 00:09:14.851 Test: blockdev copy ...passed 00:09:14.851 Suite: bdevio tests on: Nvme1n1p1 00:09:14.851 Test: blockdev write read block ...passed 00:09:14.851 Test: blockdev write zeroes read block ...passed 00:09:14.851 Test: blockdev write zeroes read no split ...passed 00:09:14.851 Test: blockdev write zeroes read split ...passed 00:09:14.852 Test: blockdev write zeroes read split partial ...passed 00:09:14.852 Test: blockdev reset ...[2024-11-22 08:29:49.745423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:14.852 [2024-11-22 08:29:49.750115] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
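Note: the compare offsets in the two partition suites line up with the GPT geometry created earlier in this run: Nvme1n1p2 starts at offset_blocks 655360 (the lba:655360 compare above) and Nvme1n1p1 at offset_blocks 256 (the lba:256 compare below), matching the bdev_get_bdevs dump. While the namespace is bound to the kernel driver (as during the setup.sh steps), the on-disk layout can be cross-checked roughly like this (a sketch):

# Print the GPT written by the earlier parted/sgdisk steps; the partition
# start blocks should match the gpt bdevs' offset_blocks.
sudo sgdisk -p /dev/nvme0n1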
00:09:14.852 passed 00:09:14.852 Test: blockdev write read 8 blocks ...passed 00:09:14.852 Test: blockdev write read size > 128k ...passed 00:09:14.852 Test: blockdev write read invalid size ...passed 00:09:14.852 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.852 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.852 Test: blockdev write read max offset ...passed 00:09:14.852 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.852 Test: blockdev writev readv 8 blocks ...passed 00:09:14.852 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.852 Test: blockdev writev readv block ...passed 00:09:14.852 Test: blockdev writev readv size > 128k ...passed 00:09:14.852 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.852 Test: blockdev comparev and writev ...[2024-11-22 08:29:49.759524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ace0e000 len:0x1000 00:09:14.852 [2024-11-22 08:29:49.759583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:14.852 passed 00:09:14.852 Test: blockdev nvme passthru rw ...passed 00:09:14.852 Test: blockdev nvme passthru vendor specific ...passed 00:09:14.852 Test: blockdev nvme admin passthru ...passed 00:09:14.852 Test: blockdev copy ...passed 00:09:14.852 Suite: bdevio tests on: Nvme0n1 00:09:14.852 Test: blockdev write read block ...passed 00:09:14.852 Test: blockdev write zeroes read block ...passed 00:09:14.852 Test: blockdev write zeroes read no split ...passed 00:09:14.852 Test: blockdev write zeroes read split ...passed 00:09:14.852 Test: blockdev write zeroes read split partial ...passed 00:09:14.852 Test: blockdev reset ...[2024-11-22 08:29:49.830543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:14.852 passed 00:09:14.852 Test: blockdev write read 8 blocks ...[2024-11-22 08:29:49.835095] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:14.852 passed 00:09:14.852 Test: blockdev write read size > 128k ...passed 00:09:14.852 Test: blockdev write read invalid size ...passed 00:09:14.852 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.852 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.852 Test: blockdev write read max offset ...passed 00:09:14.852 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.852 Test: blockdev writev readv 8 blocks ...passed 00:09:14.852 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.852 Test: blockdev writev readv block ...passed 00:09:14.852 Test: blockdev writev readv size > 128k ...passed 00:09:14.852 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.852 Test: blockdev comparev and writev ...passed 00:09:14.852 Test: blockdev nvme passthru rw ...[2024-11-22 08:29:49.843330] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:14.852 separate metadata which is not supported yet. 
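Two recurring messages above are expected rather than failures: the comparev tests issue an NVMe COMPARE that is meant to mismatch, so spdk_nvme_print_completion logs COMPARE FAILURE with status 02/85 (Compare Failure) and dnr set, at *NOTICE* level, and Nvme0n1 skips comparev because it carries separate metadata. When scanning a captured log for real problems, filtering on *ERROR* and dropping the known skip is usually enough; a sketch against the same hypothetical bdevio.log:

    # surface only unexpected errors; the COMPARE FAILURE lines are
    # *NOTICE*-level and never match this pattern
    grep '\*ERROR\*' bdevio.log | grep -v 'skipping comparev_and_writev'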
00:09:14.852 passed 00:09:14.852 Test: blockdev nvme passthru vendor specific ...[2024-11-22 08:29:49.843936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:14.852 [2024-11-22 08:29:49.843993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:14.852 passed 00:09:14.852 Test: blockdev nvme admin passthru ...passed 00:09:14.852 Test: blockdev copy ...passed 00:09:14.852 00:09:14.852 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.852 suites 7 7 n/a 0 0 00:09:14.852 tests 161 161 161 0 0 00:09:14.852 asserts 1025 1025 1025 0 n/a 00:09:14.852 00:09:14.852 Elapsed time = 1.686 seconds 00:09:14.852 0 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62559 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62559 ']' 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62559 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62559 00:09:14.852 killing process with pid 62559 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62559' 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62559 00:09:14.852 08:29:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62559 00:09:16.271 08:29:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:16.271 00:09:16.271 real 0m2.887s 00:09:16.271 user 0m7.357s 00:09:16.271 sys 0m0.411s 00:09:16.271 08:29:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.271 08:29:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:16.271 ************************************ 00:09:16.271 END TEST bdev_bounds 00:09:16.271 ************************************ 00:09:16.271 08:29:51 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:16.271 08:29:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:16.271 08:29:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.271 08:29:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:16.271 ************************************ 00:09:16.271 START TEST bdev_nbd 00:09:16.271 ************************************ 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62624 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62624 /var/tmp/spdk-nbd.sock 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62624 ']' 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.271 08:29:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:16.271 [2024-11-22 08:29:51.135543] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
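The nbd test rests on three pieces the trace wires together: bdev_svc started with the bdev JSON config and a private RPC socket, waitforlisten blocking until that socket (pid 62624 here) answers, and rpc.py driving everything afterwards. Reduced to a manual sketch with the same paths, where the polling loop stands in for waitforlisten:

    SOCK=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # poll until the RPC server accepts requests (waitforlisten retries
    # with max_retries=100)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done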
00:09:16.271 [2024-11-22 08:29:51.135683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.271 [2024-11-22 08:29:51.319926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.530 [2024-11-22 08:29:51.431607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.100 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.359 1+0 records in 00:09:17.359 1+0 records out 00:09:17.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055016 s, 7.4 MB/s 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.359 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:17.618 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.618 1+0 records in 00:09:17.618 1+0 records out 00:09:17.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638453 s, 6.4 MB/s 00:09:17.619 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.619 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:17.619 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.619 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:17.619 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:17.619 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.619 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.619 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:17.878 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:17.878 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:17.878 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:17.878 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:17.878 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:17.878 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:17.878 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:17.878 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.879 1+0 records in 00:09:17.879 1+0 records out 00:09:17.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681436 s, 6.0 MB/s 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.879 08:29:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.138 1+0 records in 00:09:18.138 1+0 records out 00:09:18.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559323 s, 7.3 MB/s 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:18.138 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.397 1+0 records in 00:09:18.397 1+0 records out 00:09:18.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071925 s, 5.7 MB/s 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:18.397 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
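The waitfornbd helper traced for nbd0 through nbd4 above always does the same two things: poll /proc/partitions (up to 20 tries) until the nbd name appears, then issue one 4 KiB O_DIRECT read through dd to prove the device actually services I/O. Condensed into a standalone sketch (the in-tree helper also checks the size of what was read; /tmp/nbdtest is an arbitrary scratch path):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # a single direct read: the node existing is not enough, the
        # backing bdev must answer I/O
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    }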
00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.657 1+0 records in 00:09:18.657 1+0 records out 00:09:18.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627753 s, 6.5 MB/s 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:18.657 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:18.916 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:18.916 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:18.916 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:18.916 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:09:18.916 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:18.916 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.917 1+0 records in 00:09:18.917 1+0 records out 00:09:18.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131377 s, 3.1 MB/s 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:18.917 08:29:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd0", 00:09:19.176 "bdev_name": "Nvme0n1" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd1", 00:09:19.176 "bdev_name": "Nvme1n1p1" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd2", 00:09:19.176 "bdev_name": "Nvme1n1p2" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd3", 00:09:19.176 "bdev_name": "Nvme2n1" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd4", 00:09:19.176 "bdev_name": "Nvme2n2" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd5", 00:09:19.176 "bdev_name": "Nvme2n3" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd6", 00:09:19.176 "bdev_name": "Nvme3n1" 00:09:19.176 } 00:09:19.176 ]' 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd0", 00:09:19.176 "bdev_name": "Nvme0n1" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd1", 00:09:19.176 "bdev_name": "Nvme1n1p1" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd2", 00:09:19.176 "bdev_name": "Nvme1n1p2" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd3", 00:09:19.176 "bdev_name": "Nvme2n1" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd4", 00:09:19.176 "bdev_name": "Nvme2n2" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd5", 00:09:19.176 "bdev_name": "Nvme2n3" 00:09:19.176 }, 00:09:19.176 { 00:09:19.176 "nbd_device": "/dev/nbd6", 00:09:19.176 "bdev_name": "Nvme3n1" 00:09:19.176 } 00:09:19.176 ]' 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.176 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.436 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.695 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.954 08:29:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.213 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.472 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
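Teardown is the mirror image for each device in turn: nbd_stop_disk detaches the bdev from the device, and waitfornbd_exit polls the same /proc/partitions list until the entry disappears. As one function over the same socket (a sketch):

    nbd_stop() {
        local dev=$1 i
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_stop_disk "$dev"
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$(basename "$dev")" /proc/partitions || return 0
            sleep 0.1
        done
        echo "$dev still present after nbd_stop_disk" >&2
        return 1
    }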
00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.731 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.990 08:29:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:20.990 08:29:56 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:20.990 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:21.279 /dev/nbd0 00:09:21.279 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:21.279 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:21.279 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:21.279 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:21.279 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:21.279 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:21.279 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:21.279 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.280 1+0 records in 00:09:21.280 1+0 records out 00:09:21.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469611 s, 8.7 MB/s 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:21.280 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:21.539 /dev/nbd1 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:21.539 08:29:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.539 1+0 records in 00:09:21.539 1+0 records out 00:09:21.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045191 s, 9.1 MB/s 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:21.539 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:21.798 /dev/nbd10 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.798 1+0 records in 00:09:21.798 1+0 records out 00:09:21.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798731 s, 5.1 MB/s 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:21.798 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.799 08:29:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:21.799 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:21.799 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.799 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:21.799 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:22.057 /dev/nbd11 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:22.057 1+0 records in 00:09:22.057 1+0 records out 00:09:22.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055467 s, 7.4 MB/s 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:22.057 08:29:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.057 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:22.057 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:22.057 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.057 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:22.057 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:22.316 /dev/nbd12 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
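For the data-verify phase the seven bdevs are mapped again, this time with the target device passed explicitly so the layout is deterministic: /dev/nbd0, /dev/nbd1, then /dev/nbd10 through /dev/nbd14. The whole mapping, plus the nbd_get_disks check the harness runs afterwards, fits in a few lines:

    bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    for i in "${!bdevs[@]}"; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
    done
    # print the seven active mappings, as the harness does with jq
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_get_disks | jq -r '.[] | .nbd_device'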
00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:22.316 1+0 records in 00:09:22.316 1+0 records out 00:09:22.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814493 s, 5.0 MB/s 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:22.316 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:22.576 /dev/nbd13 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:22.576 1+0 records in 00:09:22.576 1+0 records out 00:09:22.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739316 s, 5.5 MB/s 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:22.576 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:22.837 /dev/nbd14 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:22.837 1+0 records in 00:09:22.837 1+0 records out 00:09:22.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000929716 s, 4.4 MB/s 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.837 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:23.097 08:29:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd0", 00:09:23.097 "bdev_name": "Nvme0n1" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd1", 00:09:23.097 "bdev_name": "Nvme1n1p1" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd10", 00:09:23.097 "bdev_name": "Nvme1n1p2" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd11", 00:09:23.097 "bdev_name": "Nvme2n1" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd12", 00:09:23.097 "bdev_name": "Nvme2n2" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd13", 00:09:23.097 "bdev_name": "Nvme2n3" 
00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd14", 00:09:23.097 "bdev_name": "Nvme3n1" 00:09:23.097 } 00:09:23.097 ]' 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd0", 00:09:23.097 "bdev_name": "Nvme0n1" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd1", 00:09:23.097 "bdev_name": "Nvme1n1p1" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd10", 00:09:23.097 "bdev_name": "Nvme1n1p2" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd11", 00:09:23.097 "bdev_name": "Nvme2n1" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd12", 00:09:23.097 "bdev_name": "Nvme2n2" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd13", 00:09:23.097 "bdev_name": "Nvme2n3" 00:09:23.097 }, 00:09:23.097 { 00:09:23.097 "nbd_device": "/dev/nbd14", 00:09:23.097 "bdev_name": "Nvme3n1" 00:09:23.097 } 00:09:23.097 ]' 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:23.097 /dev/nbd1 00:09:23.097 /dev/nbd10 00:09:23.097 /dev/nbd11 00:09:23.097 /dev/nbd12 00:09:23.097 /dev/nbd13 00:09:23.097 /dev/nbd14' 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:23.097 /dev/nbd1 00:09:23.097 /dev/nbd10 00:09:23.097 /dev/nbd11 00:09:23.097 /dev/nbd12 00:09:23.097 /dev/nbd13 00:09:23.097 /dev/nbd14' 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:23.097 256+0 records in 00:09:23.097 256+0 records out 00:09:23.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133189 s, 78.7 MB/s 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.097 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:23.356 256+0 records in 00:09:23.356 256+0 records out 00:09:23.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.150474 s, 7.0 MB/s 00:09:23.356 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.356 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:23.356 256+0 records in 00:09:23.356 256+0 records out 00:09:23.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146597 s, 7.2 MB/s 00:09:23.356 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.356 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:23.615 256+0 records in 00:09:23.615 256+0 records out 00:09:23.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148855 s, 7.0 MB/s 00:09:23.615 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.615 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:23.615 256+0 records in 00:09:23.615 256+0 records out 00:09:23.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14441 s, 7.3 MB/s 00:09:23.615 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.615 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:23.873 256+0 records in 00:09:23.873 256+0 records out 00:09:23.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147816 s, 7.1 MB/s 00:09:23.873 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.873 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:24.133 256+0 records in 00:09:24.133 256+0 records out 00:09:24.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14521 s, 7.2 MB/s 00:09:24.133 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:24.133 08:29:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:24.133 256+0 records in 00:09:24.133 256+0 records out 00:09:24.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146084 s, 7.2 MB/s 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.133 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:24.391 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.392 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.651 08:29:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.220 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.480 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.740 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.000 08:30:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:26.259 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:26.518 malloc_lvol_verify 00:09:26.518 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:26.518 05f66ed8-dfde-4637-af20-2353603798f8 00:09:26.776 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:26.776 8dbc1ebc-8d3e-434c-a2c7-f41efbea566a 00:09:26.776 08:30:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:27.034 /dev/nbd0 00:09:27.034 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:27.034 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:27.034 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:27.034 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:27.034 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:27.034 mke2fs 1.47.0 (5-Feb-2023) 00:09:27.034 Discarding device blocks: 0/4096 done 00:09:27.034 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:27.034 00:09:27.034 Allocating group tables: 0/1 done 00:09:27.034 Writing inode tables: 0/1 done 00:09:27.291 Creating journal (1024 blocks): done 00:09:27.291 Writing superblocks and filesystem accounting information: 0/1 done 00:09:27.291 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:27.291 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62624 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62624 ']' 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62624 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.292 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62624 00:09:27.549 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.549 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.549 killing process with pid 62624 00:09:27.549 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62624' 00:09:27.549 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62624 00:09:27.549 08:30:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62624 00:09:28.927 08:30:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:28.927 00:09:28.927 real 0m12.564s 00:09:28.927 user 0m16.206s 00:09:28.927 sys 0m5.309s 00:09:28.927 ************************************ 00:09:28.927 END TEST bdev_nbd 00:09:28.927 ************************************ 00:09:28.927 08:30:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.927 08:30:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:28.927 08:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:28.927 08:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:28.927 08:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:28.927 skipping fio tests on NVMe due to multi-ns failures. 00:09:28.927 08:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:28.927 08:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:28.927 08:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:28.927 08:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:28.927 08:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.927 08:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:28.927 ************************************ 00:09:28.927 START TEST bdev_verify 00:09:28.927 ************************************ 00:09:28.927 08:30:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:28.927 [2024-11-22 08:30:03.774257] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:09:28.927 [2024-11-22 08:30:03.774388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63050 ] 00:09:28.927 [2024-11-22 08:30:03.959716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.186 [2024-11-22 08:30:04.082338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.186 [2024-11-22 08:30:04.082376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.752 Running I/O for 5 seconds... 
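
(Stripped of the run_test harness, the bdev_verify stage launched above is a single bdevperf invocation; every flag below is verbatim from the trace. Per bdevperf's usual conventions, -q is the queue depth, -o the I/O size in bytes, -w the workload, and -t the run time in seconds, with -m 0x3 matching the two-core EAL mask (-c 0x3) in the init lines:

  # Standalone form of the verify pass traced above.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
)
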
00:09:32.061 22144.00 IOPS, 86.50 MiB/s [2024-11-22T08:30:08.083Z] 19648.00 IOPS, 76.75 MiB/s [2024-11-22T08:30:09.458Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-22T08:30:10.026Z] 18544.00 IOPS, 72.44 MiB/s [2024-11-22T08:30:10.026Z] 18841.60 IOPS, 73.60 MiB/s 00:09:34.939 Latency(us) 00:09:34.939 [2024-11-22T08:30:10.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.939 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x0 length 0xbd0bd 00:09:34.939 Nvme0n1 : 5.04 1398.00 5.46 0.00 0.00 91212.09 19371.28 83380.74 00:09:34.939 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:34.939 Nvme0n1 : 5.07 1250.57 4.89 0.00 0.00 101823.04 14107.35 92645.27 00:09:34.939 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x0 length 0x4ff80 00:09:34.939 Nvme1n1p1 : 5.07 1401.12 5.47 0.00 0.00 90774.85 9843.56 76642.90 00:09:34.939 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:34.939 Nvme1n1p1 : 5.08 1259.32 4.92 0.00 0.00 101281.36 13159.84 85907.43 00:09:34.939 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x0 length 0x4ff7f 00:09:34.939 Nvme1n1p2 : 5.09 1409.60 5.51 0.00 0.00 90317.81 11317.46 74958.44 00:09:34.939 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:34.939 Nvme1n1p2 : 5.08 1258.63 4.92 0.00 0.00 101002.99 14423.18 87591.89 00:09:34.939 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x0 length 0x80000 00:09:34.939 Nvme2n1 : 5.09 1408.76 5.50 0.00 0.00 90238.49 13475.68 73695.10 00:09:34.939 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x80000 length 0x80000 00:09:34.939 Nvme2n1 : 5.09 1258.19 4.91 0.00 0.00 100815.92 15581.25 90539.69 00:09:34.939 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x0 length 0x80000 00:09:34.939 Nvme2n2 : 5.09 1408.35 5.50 0.00 0.00 90132.80 13265.12 73695.10 00:09:34.939 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x80000 length 0x80000 00:09:34.939 Nvme2n2 : 5.09 1257.94 4.91 0.00 0.00 100647.62 15370.69 92224.15 00:09:34.939 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x0 length 0x80000 00:09:34.939 Nvme2n3 : 5.09 1407.93 5.50 0.00 0.00 89992.98 13370.40 75800.67 00:09:34.939 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x80000 length 0x80000 00:09:34.939 Nvme2n3 : 5.09 1257.63 4.91 0.00 0.00 100506.92 15160.13 96014.19 00:09:34.939 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x0 length 0x20000 00:09:34.939 Nvme3n1 : 5.09 1407.58 5.50 0.00 0.00 89858.28 13370.40 79169.59 00:09:34.939 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:34.939 Verification LBA range: start 0x20000 length 0x20000 00:09:34.939 
Nvme3n1 : 5.09 1257.28 4.91 0.00 0.00 100405.37 14739.02 97277.53 00:09:34.939 [2024-11-22T08:30:10.026Z] =================================================================================================================== 00:09:34.939 [2024-11-22T08:30:10.026Z] Total : 18640.89 72.82 0.00 0.00 95348.89 9843.56 97277.53 00:09:36.317 00:09:36.317 real 0m7.590s 00:09:36.317 user 0m14.008s 00:09:36.317 sys 0m0.316s 00:09:36.317 08:30:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.317 08:30:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:36.317 ************************************ 00:09:36.317 END TEST bdev_verify 00:09:36.317 ************************************ 00:09:36.317 08:30:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:36.317 08:30:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:36.317 08:30:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.317 08:30:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:36.317 ************************************ 00:09:36.317 START TEST bdev_verify_big_io 00:09:36.317 ************************************ 00:09:36.317 08:30:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:36.576 [2024-11-22 08:30:11.436603] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:09:36.576 [2024-11-22 08:30:11.436735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63152 ] 00:09:36.576 [2024-11-22 08:30:11.617888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.835 [2024-11-22 08:30:11.734820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.835 [2024-11-22 08:30:11.734848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.773 Running I/O for 5 seconds... 
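
(The big-I/O pass starting here repeats the previous run with only the block size changed, -o 65536 (64 KiB) in place of -o 4096. In these result tables MiB/s is simply IOPS times the I/O size; a quick check that reproduces the 72.82 MiB/s of the verify Total row above:

  # MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 18640.89 * 4096 / 1048576 }'   # prints 72.82 MiB/s
)
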
00:09:42.842 2308.00 IOPS, 144.25 MiB/s [2024-11-22T08:30:18.496Z] 3267.00 IOPS, 204.19 MiB/s [2024-11-22T08:30:18.496Z] 3928.33 IOPS, 245.52 MiB/s 00:09:43.409 Latency(us) 00:09:43.409 [2024-11-22T08:30:18.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.409 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:43.409 Verification LBA range: start 0x0 length 0xbd0b 00:09:43.409 Nvme0n1 : 5.81 107.31 6.71 0.00 0.00 1162007.40 33268.07 2196535.11 00:09:43.409 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:43.409 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:43.409 Nvme0n1 : 5.44 144.29 9.02 0.00 0.00 854597.49 35373.65 1293664.85 00:09:43.409 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:43.409 Verification LBA range: start 0x0 length 0x4ff8 00:09:43.409 Nvme1n1p1 : 5.76 133.80 8.36 0.00 0.00 909422.66 76642.90 1010675.66 00:09:43.409 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:43.409 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:43.409 Nvme1n1p1 : 5.53 150.41 9.40 0.00 0.00 809585.73 86749.66 1327354.04 00:09:43.409 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:43.409 Verification LBA range: start 0x0 length 0x4ff7 00:09:43.409 Nvme1n1p2 : 5.76 133.58 8.35 0.00 0.00 888064.79 76221.79 983724.31 00:09:43.409 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:43.409 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:43.409 Nvme1n1p2 : 5.66 162.98 10.19 0.00 0.00 729962.92 88434.12 976986.47 00:09:43.409 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:43.409 Verification LBA range: start 0x0 length 0x8000 00:09:43.410 Nvme2n1 : 5.73 134.08 8.38 0.00 0.00 868271.94 76221.79 997199.99 00:09:43.410 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:43.410 Verification LBA range: start 0x8000 length 0x8000 00:09:43.410 Nvme2n1 : 5.67 166.97 10.44 0.00 0.00 700617.27 40637.58 990462.15 00:09:43.410 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:43.410 Verification LBA range: start 0x0 length 0x8000 00:09:43.410 Nvme2n2 : 5.77 129.89 8.12 0.00 0.00 877791.86 35794.76 1536227.01 00:09:43.410 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:43.410 Verification LBA range: start 0x8000 length 0x8000 00:09:43.410 Nvme2n2 : 5.73 174.28 10.89 0.00 0.00 655427.62 37268.67 909608.10 00:09:43.410 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:43.410 Verification LBA range: start 0x0 length 0x8000 00:09:43.410 Nvme2n3 : 5.81 136.72 8.55 0.00 0.00 816653.69 22845.48 1549702.68 00:09:43.410 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:43.410 Verification LBA range: start 0x8000 length 0x8000 00:09:43.410 Nvme2n3 : 5.76 181.91 11.37 0.00 0.00 614854.11 31373.06 862443.23 00:09:43.410 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:43.410 Verification LBA range: start 0x0 length 0x2000 00:09:43.410 Nvme3n1 : 5.82 151.05 9.44 0.00 0.00 724551.68 4448.03 1563178.36 00:09:43.410 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:43.410 Verification LBA range: start 0x2000 length 0x2000 00:09:43.410 Nvme3n1 : 5.80 199.78 12.49 0.00 0.00 548799.06 3974.27 875918.91 00:09:43.410 
[2024-11-22T08:30:18.497Z] =================================================================================================================== 00:09:43.410 [2024-11-22T08:30:18.497Z] Total : 2107.04 131.69 0.00 0.00 774980.29 3974.27 2196535.11 00:09:45.311 00:09:45.311 real 0m8.977s 00:09:45.311 user 0m16.744s 00:09:45.311 sys 0m0.371s 00:09:45.311 ************************************ 00:09:45.311 END TEST bdev_verify_big_io 00:09:45.311 ************************************ 00:09:45.311 08:30:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.311 08:30:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:45.311 08:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:45.311 08:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:45.311 08:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.311 08:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:45.311 ************************************ 00:09:45.311 START TEST bdev_write_zeroes 00:09:45.311 ************************************ 00:09:45.311 08:30:20 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:45.571 [2024-11-22 08:30:20.481769] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:09:45.571 [2024-11-22 08:30:20.481895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63268 ] 00:09:45.831 [2024-11-22 08:30:20.658985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.831 [2024-11-22 08:30:20.763304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.399 Running I/O for 1 seconds... 
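
(The bdevperf pass below is a short zero-fill check: one second of write_zeroes commands against every bdev on a single core (the EAL line shows -c 0x1). Reconstructed verbatim from the run_test trace, minus the empty trailing argument the harness passes:

  # One-second write_zeroes pass over the bdevs in bdev.json.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1
)
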
00:09:47.771 77056.00 IOPS, 301.00 MiB/s 00:09:47.771 Latency(us) 00:09:47.771 [2024-11-22T08:30:22.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.771 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:47.771 Nvme0n1 : 1.02 10983.96 42.91 0.00 0.00 11625.38 10054.12 35373.65 00:09:47.771 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:47.771 Nvme1n1p1 : 1.02 10972.47 42.86 0.00 0.00 11623.16 9843.56 36215.88 00:09:47.771 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:47.771 Nvme1n1p2 : 1.02 10961.10 42.82 0.00 0.00 11608.93 9790.92 36215.88 00:09:47.771 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:47.771 Nvme2n1 : 1.02 10951.42 42.78 0.00 0.00 11585.97 10001.48 34741.98 00:09:47.771 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:47.771 Nvme2n2 : 1.02 10941.73 42.74 0.00 0.00 11559.44 10001.48 32846.96 00:09:47.771 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:47.771 Nvme2n3 : 1.03 10987.38 42.92 0.00 0.00 11481.10 6527.28 29056.93 00:09:47.771 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:47.771 Nvme3n1 : 1.03 10977.44 42.88 0.00 0.00 11456.61 6737.84 27583.02 00:09:47.771 [2024-11-22T08:30:22.858Z] =================================================================================================================== 00:09:47.771 [2024-11-22T08:30:22.858Z] Total : 76775.52 299.90 0.00 0.00 11562.79 6527.28 36215.88 00:09:48.707 00:09:48.707 real 0m3.293s 00:09:48.707 user 0m2.914s 00:09:48.707 sys 0m0.266s 00:09:48.707 08:30:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.707 08:30:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:48.707 ************************************ 00:09:48.707 END TEST bdev_write_zeroes 00:09:48.707 ************************************ 00:09:48.707 08:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:48.707 08:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:48.707 08:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.707 08:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:48.707 ************************************ 00:09:48.707 START TEST bdev_json_nonenclosed 00:09:48.707 ************************************ 00:09:48.707 08:30:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:48.965 [2024-11-22 08:30:23.842203] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:09:48.965 [2024-11-22 08:30:23.842327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63321 ] 00:09:48.965 [2024-11-22 08:30:24.024021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.223 [2024-11-22 08:30:24.156241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.223 [2024-11-22 08:30:24.156341] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:49.223 [2024-11-22 08:30:24.156364] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:49.223 [2024-11-22 08:30:24.156375] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:49.482 00:09:49.482 real 0m0.674s 00:09:49.482 user 0m0.391s 00:09:49.482 sys 0m0.179s 00:09:49.482 08:30:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.482 08:30:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:49.482 ************************************ 00:09:49.482 END TEST bdev_json_nonenclosed 00:09:49.482 ************************************ 00:09:49.482 08:30:24 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:49.482 08:30:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:49.482 08:30:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.482 08:30:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:49.482 ************************************ 00:09:49.482 START TEST bdev_json_nonarray 00:09:49.482 ************************************ 00:09:49.482 08:30:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:49.741 [2024-11-22 08:30:24.567098] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:09:49.741 [2024-11-22 08:30:24.567208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63347 ] 00:09:49.741 [2024-11-22 08:30:24.748009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.000 [2024-11-22 08:30:24.871544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.000 [2024-11-22 08:30:24.871656] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:50.000 [2024-11-22 08:30:24.871680] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:50.000 [2024-11-22 08:30:24.871693] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:50.260 00:09:50.260 real 0m0.637s 00:09:50.260 user 0m0.386s 00:09:50.260 sys 0m0.146s 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:50.260 ************************************ 00:09:50.260 END TEST bdev_json_nonarray 00:09:50.260 ************************************ 00:09:50.260 08:30:25 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:50.260 08:30:25 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:50.260 08:30:25 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:50.260 08:30:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.260 08:30:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.260 08:30:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:50.260 ************************************ 00:09:50.260 START TEST bdev_gpt_uuid 00:09:50.260 ************************************ 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63372 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63372 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63372 ']' 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.260 08:30:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:50.260 [2024-11-22 08:30:25.287529] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:09:50.260 [2024-11-22 08:30:25.287659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63372 ] 00:09:50.519 [2024-11-22 08:30:25.468599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.519 [2024-11-22 08:30:25.598539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:51.898 Some configs were skipped because the RPC state that can call them passed over. 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:51.898 { 00:09:51.898 "name": "Nvme1n1p1", 00:09:51.898 "aliases": [ 00:09:51.898 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:51.898 ], 00:09:51.898 "product_name": "GPT Disk", 00:09:51.898 "block_size": 4096, 00:09:51.898 "num_blocks": 655104, 00:09:51.898 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:51.898 "assigned_rate_limits": { 00:09:51.898 "rw_ios_per_sec": 0, 00:09:51.898 "rw_mbytes_per_sec": 0, 00:09:51.898 "r_mbytes_per_sec": 0, 00:09:51.898 "w_mbytes_per_sec": 0 00:09:51.898 }, 00:09:51.898 "claimed": false, 00:09:51.898 "zoned": false, 00:09:51.898 "supported_io_types": { 00:09:51.898 "read": true, 00:09:51.898 "write": true, 00:09:51.898 "unmap": true, 00:09:51.898 "flush": true, 00:09:51.898 "reset": true, 00:09:51.898 "nvme_admin": false, 00:09:51.898 "nvme_io": false, 00:09:51.898 "nvme_io_md": false, 00:09:51.898 "write_zeroes": true, 00:09:51.898 "zcopy": false, 00:09:51.898 "get_zone_info": false, 00:09:51.898 "zone_management": false, 00:09:51.898 "zone_append": false, 00:09:51.898 "compare": true, 00:09:51.898 "compare_and_write": false, 00:09:51.898 "abort": true, 00:09:51.898 "seek_hole": false, 00:09:51.898 "seek_data": false, 00:09:51.898 "copy": true, 00:09:51.898 "nvme_iov_md": false 00:09:51.898 }, 00:09:51.898 "driver_specific": { 
00:09:51.898 "gpt": { 00:09:51.898 "base_bdev": "Nvme1n1", 00:09:51.898 "offset_blocks": 256, 00:09:51.898 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:51.898 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:51.898 "partition_name": "SPDK_TEST_first" 00:09:51.898 } 00:09:51.898 } 00:09:51.898 } 00:09:51.898 ]' 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:51.898 08:30:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:52.157 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:52.157 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:52.157 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:52.157 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:52.157 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.157 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:52.158 { 00:09:52.158 "name": "Nvme1n1p2", 00:09:52.158 "aliases": [ 00:09:52.158 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:52.158 ], 00:09:52.158 "product_name": "GPT Disk", 00:09:52.158 "block_size": 4096, 00:09:52.158 "num_blocks": 655103, 00:09:52.158 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:52.158 "assigned_rate_limits": { 00:09:52.158 "rw_ios_per_sec": 0, 00:09:52.158 "rw_mbytes_per_sec": 0, 00:09:52.158 "r_mbytes_per_sec": 0, 00:09:52.158 "w_mbytes_per_sec": 0 00:09:52.158 }, 00:09:52.158 "claimed": false, 00:09:52.158 "zoned": false, 00:09:52.158 "supported_io_types": { 00:09:52.158 "read": true, 00:09:52.158 "write": true, 00:09:52.158 "unmap": true, 00:09:52.158 "flush": true, 00:09:52.158 "reset": true, 00:09:52.158 "nvme_admin": false, 00:09:52.158 "nvme_io": false, 00:09:52.158 "nvme_io_md": false, 00:09:52.158 "write_zeroes": true, 00:09:52.158 "zcopy": false, 00:09:52.158 "get_zone_info": false, 00:09:52.158 "zone_management": false, 00:09:52.158 "zone_append": false, 00:09:52.158 "compare": true, 00:09:52.158 "compare_and_write": false, 00:09:52.158 "abort": true, 00:09:52.158 "seek_hole": false, 00:09:52.158 "seek_data": false, 00:09:52.158 "copy": true, 00:09:52.158 "nvme_iov_md": false 00:09:52.158 }, 00:09:52.158 "driver_specific": { 00:09:52.158 "gpt": { 00:09:52.158 "base_bdev": "Nvme1n1", 00:09:52.158 "offset_blocks": 655360, 00:09:52.158 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:52.158 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:52.158 "partition_name": "SPDK_TEST_second" 00:09:52.158 } 00:09:52.158 } 00:09:52.158 } 00:09:52.158 ]' 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63372 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63372 ']' 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63372 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.158 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63372 00:09:52.417 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.417 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.417 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63372' 00:09:52.417 killing process with pid 63372 00:09:52.417 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63372 00:09:52.417 08:30:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63372 00:09:54.959 00:09:54.960 real 0m4.542s 00:09:54.960 user 0m4.505s 00:09:54.960 sys 0m0.673s 00:09:54.960 ************************************ 00:09:54.960 END TEST bdev_gpt_uuid 00:09:54.960 ************************************ 00:09:54.960 08:30:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.960 08:30:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:54.960 08:30:29 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:55.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.784 Waiting for block devices as requested 00:09:55.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.784 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:56.043 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:56.043 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.320 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:01.320 08:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:01.320 08:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:01.579 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:01.579 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:01.579 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:01.579 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:01.579 08:30:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:01.579 00:10:01.579 real 1m5.249s 00:10:01.579 user 1m20.526s 00:10:01.579 sys 0m12.456s 00:10:01.579 08:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.579 ************************************ 00:10:01.579 END TEST blockdev_nvme_gpt 00:10:01.579 ************************************ 00:10:01.579 08:30:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:01.579 08:30:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:01.579 08:30:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:01.579 08:30:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.579 08:30:36 -- common/autotest_common.sh@10 -- # set +x 00:10:01.579 ************************************ 00:10:01.579 START TEST nvme 00:10:01.579 ************************************ 00:10:01.579 08:30:36 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:01.579 * Looking for test storage... 00:10:01.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:01.579 08:30:36 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.579 08:30:36 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.579 08:30:36 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.839 08:30:36 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.839 08:30:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.839 08:30:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.839 08:30:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.839 08:30:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.839 08:30:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.839 08:30:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.839 08:30:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.839 08:30:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.839 08:30:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.839 08:30:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.839 08:30:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.839 08:30:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:01.839 08:30:36 nvme -- scripts/common.sh@345 -- # : 1 00:10:01.839 08:30:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.839 08:30:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.839 08:30:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:10:01.839 08:30:36 nvme -- scripts/common.sh@353 -- # local d=1 00:10:01.839 08:30:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.839 08:30:36 nvme -- scripts/common.sh@355 -- # echo 1 00:10:01.839 08:30:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.839 08:30:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:10:01.839 08:30:36 nvme -- scripts/common.sh@353 -- # local d=2 00:10:01.839 08:30:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.839 08:30:36 nvme -- scripts/common.sh@355 -- # echo 2 00:10:01.839 08:30:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.839 08:30:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.839 08:30:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.839 08:30:36 nvme -- scripts/common.sh@368 -- # return 0 00:10:01.839 08:30:36 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.839 08:30:36 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.839 --rc genhtml_branch_coverage=1 00:10:01.839 --rc genhtml_function_coverage=1 00:10:01.839 --rc genhtml_legend=1 00:10:01.839 --rc geninfo_all_blocks=1 00:10:01.839 --rc geninfo_unexecuted_blocks=1 00:10:01.839 00:10:01.839 ' 00:10:01.839 08:30:36 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.839 --rc genhtml_branch_coverage=1 00:10:01.839 --rc genhtml_function_coverage=1 00:10:01.839 --rc genhtml_legend=1 00:10:01.839 --rc geninfo_all_blocks=1 00:10:01.839 --rc geninfo_unexecuted_blocks=1 00:10:01.839 00:10:01.839 ' 00:10:01.839 08:30:36 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.839 --rc genhtml_branch_coverage=1 00:10:01.839 --rc genhtml_function_coverage=1 00:10:01.839 --rc genhtml_legend=1 00:10:01.839 --rc geninfo_all_blocks=1 00:10:01.839 --rc geninfo_unexecuted_blocks=1 00:10:01.839 00:10:01.839 ' 00:10:01.839 08:30:36 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.839 --rc genhtml_branch_coverage=1 00:10:01.839 --rc genhtml_function_coverage=1 00:10:01.839 --rc genhtml_legend=1 00:10:01.839 --rc geninfo_all_blocks=1 00:10:01.839 --rc geninfo_unexecuted_blocks=1 00:10:01.839 00:10:01.839 ' 00:10:01.839 08:30:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:02.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:03.346 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.346 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.346 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.346 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.606 08:30:38 nvme -- nvme/nvme.sh@79 -- # uname 00:10:03.606 08:30:38 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:03.606 08:30:38 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:03.606 08:30:38 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:03.606 08:30:38 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:03.606 08:30:38 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:10:03.606 08:30:38 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:10:03.606 08:30:38 nvme -- common/autotest_common.sh@1075 -- # stubpid=64045 00:10:03.606 Waiting for stub to ready for secondary processes... 00:10:03.606 08:30:38 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:10:03.606 08:30:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:03.606 08:30:38 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64045 ]] 00:10:03.606 08:30:38 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:10:03.606 08:30:38 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:03.606 [2024-11-22 08:30:38.498107] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:10:03.606 [2024-11-22 08:30:38.498252] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:04.544 08:30:39 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:04.544 08:30:39 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64045 ]] 00:10:04.544 08:30:39 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:10:04.544 [2024-11-22 08:30:39.522238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:04.544 [2024-11-22 08:30:39.618843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.544 [2024-11-22 08:30:39.618975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.544 [2024-11-22 08:30:39.619031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.804 [2024-11-22 08:30:39.636400] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:04.804 [2024-11-22 08:30:39.636436] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:04.804 [2024-11-22 08:30:39.651482] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:04.804 [2024-11-22 08:30:39.651595] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:04.804 [2024-11-22 08:30:39.654739] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:04.804 [2024-11-22 08:30:39.654923] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:04.804 [2024-11-22 08:30:39.655007] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:04.804 [2024-11-22 08:30:39.658154] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:04.804 [2024-11-22 08:30:39.658378] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:04.804 [2024-11-22 08:30:39.658485] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:04.804 [2024-11-22 08:30:39.662792] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:04.804 [2024-11-22 08:30:39.663065] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:04.804 [2024-11-22 08:30:39.663176] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:04.804 [2024-11-22 08:30:39.663251] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:04.804 [2024-11-22 08:30:39.663317] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:05.743 08:30:40 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:05.743 done. 00:10:05.743 08:30:40 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:10:05.743 08:30:40 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:05.743 08:30:40 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:10:05.743 08:30:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.743 08:30:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.743 ************************************ 00:10:05.743 START TEST nvme_reset 00:10:05.743 ************************************ 00:10:05.743 08:30:40 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:05.743 Initializing NVMe Controllers 00:10:05.743 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:05.743 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:05.743 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:05.743 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:05.743 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:05.743 00:10:05.743 real 0m0.318s 00:10:05.743 user 0m0.098s 00:10:05.743 sys 0m0.179s 00:10:05.743 08:30:40 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.743 08:30:40 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:05.743 ************************************ 00:10:05.743 END TEST nvme_reset 00:10:05.743 ************************************ 00:10:06.003 08:30:40 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:06.003 08:30:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.003 08:30:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.003 08:30:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.003 ************************************ 00:10:06.003 START TEST nvme_identify 00:10:06.003 ************************************ 00:10:06.003 08:30:40 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:10:06.003 08:30:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:06.003 08:30:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:06.003 08:30:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:06.003 08:30:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:06.003 08:30:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:06.003 08:30:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:10:06.003 08:30:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.003 08:30:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:06.003 08:30:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:06.003 08:30:40 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:06.003 08:30:40 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.003 08:30:40 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:06.265 [2024-11-22 08:30:41.219406] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64078 terminated unexpected 00:10:06.265 ===================================================== 00:10:06.265 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:06.265 ===================================================== 00:10:06.265 Controller Capabilities/Features 00:10:06.265 ================================ 00:10:06.265 Vendor ID: 1b36 00:10:06.265 Subsystem Vendor ID: 1af4 00:10:06.265 Serial Number: 12340 00:10:06.265 Model Number: QEMU NVMe Ctrl 00:10:06.265 Firmware Version: 8.0.0 00:10:06.265 Recommended Arb Burst: 6 00:10:06.265 IEEE OUI Identifier: 00 54 52 00:10:06.265 Multi-path I/O 00:10:06.265 May have multiple subsystem ports: No 00:10:06.265 May have multiple controllers: No 00:10:06.265 Associated with SR-IOV VF: No 00:10:06.265 Max Data Transfer Size: 524288 00:10:06.265 Max Number of Namespaces: 256 00:10:06.265 Max Number of I/O Queues: 64 00:10:06.265 NVMe Specification Version (VS): 1.4 00:10:06.265 NVMe Specification Version (Identify): 1.4 00:10:06.265 Maximum Queue Entries: 2048 00:10:06.265 Contiguous Queues Required: Yes 00:10:06.265 Arbitration Mechanisms Supported 00:10:06.265 Weighted Round Robin: Not Supported 00:10:06.265 Vendor Specific: Not Supported 00:10:06.265 Reset Timeout: 7500 ms 00:10:06.265 Doorbell Stride: 4 bytes 00:10:06.265 NVM Subsystem Reset: Not Supported 00:10:06.265 Command Sets Supported 00:10:06.265 NVM Command Set: Supported 00:10:06.265 Boot Partition: Not Supported 00:10:06.265 Memory Page Size Minimum: 4096 bytes 00:10:06.265 Memory Page Size Maximum: 65536 bytes 00:10:06.265 Persistent Memory Region: Not Supported 00:10:06.265 Optional Asynchronous Events Supported 00:10:06.265 Namespace Attribute Notices: Supported 00:10:06.265 Firmware Activation Notices: Not Supported 00:10:06.265 ANA Change Notices: Not Supported 00:10:06.265 PLE Aggregate Log Change Notices: Not Supported 00:10:06.265 LBA Status Info Alert Notices: Not Supported 00:10:06.265 EGE Aggregate Log Change Notices: Not Supported 00:10:06.265 Normal NVM Subsystem Shutdown event: Not Supported 00:10:06.265 Zone Descriptor Change Notices: Not Supported 00:10:06.265 Discovery Log Change Notices: Not Supported 00:10:06.265 Controller Attributes 00:10:06.265 128-bit Host Identifier: Not Supported 00:10:06.265 Non-Operational Permissive Mode: Not Supported 00:10:06.265 NVM Sets: Not Supported 00:10:06.265 Read Recovery Levels: Not Supported 00:10:06.265 Endurance Groups: Not Supported 00:10:06.265 Predictable Latency Mode: Not Supported 00:10:06.265 Traffic Based Keep ALive: Not Supported 00:10:06.265 Namespace Granularity: Not Supported 00:10:06.265 SQ Associations: Not Supported 00:10:06.265 UUID List: Not Supported 00:10:06.265 Multi-Domain Subsystem: Not Supported 00:10:06.265 Fixed Capacity Management: Not Supported 00:10:06.265 Variable Capacity Management: Not Supported 00:10:06.265 Delete Endurance Group: Not Supported 00:10:06.265 Delete NVM Set: Not Supported 00:10:06.265 Extended LBA Formats Supported: Supported 00:10:06.265 Flexible Data Placement Supported: Not Supported 00:10:06.265 00:10:06.265 Controller Memory Buffer Support 00:10:06.265 ================================ 00:10:06.265 Supported: No 
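
The xtrace above shows how nvme_identify builds its device list: autotest_common.sh's get_nvme_bdfs pipes scripts/gen_nvme.sh through jq to pull each controller's PCI address out of the generated JSON config, asserts the array is non-empty, then printf's the BDFs. A minimal standalone sketch of that enumeration, using the same paths as this run (the error message is illustrative, not from the log):

    #!/usr/bin/env bash
    # Collect NVMe PCI addresses (BDFs) the way get_nvme_bdfs does above:
    # gen_nvme.sh emits an SPDK JSON config; jq extracts each traddr field.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"  # this run printed 0000:00:10.0 through 0000:00:13.0
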
00:10:06.265 00:10:06.266 Persistent Memory Region Support 00:10:06.266 ================================ 00:10:06.266 Supported: No 00:10:06.266 00:10:06.266 Admin Command Set Attributes 00:10:06.266 ============================ 00:10:06.266 Security Send/Receive: Not Supported 00:10:06.266 Format NVM: Supported 00:10:06.266 Firmware Activate/Download: Not Supported 00:10:06.266 Namespace Management: Supported 00:10:06.266 Device Self-Test: Not Supported 00:10:06.266 Directives: Supported 00:10:06.266 NVMe-MI: Not Supported 00:10:06.266 Virtualization Management: Not Supported 00:10:06.266 Doorbell Buffer Config: Supported 00:10:06.266 Get LBA Status Capability: Not Supported 00:10:06.266 Command & Feature Lockdown Capability: Not Supported 00:10:06.266 Abort Command Limit: 4 00:10:06.266 Async Event Request Limit: 4 00:10:06.266 Number of Firmware Slots: N/A 00:10:06.266 Firmware Slot 1 Read-Only: N/A 00:10:06.266 Firmware Activation Without Reset: N/A 00:10:06.266 Multiple Update Detection Support: N/A 00:10:06.266 Firmware Update Granularity: No Information Provided 00:10:06.266 Per-Namespace SMART Log: Yes 00:10:06.266 Asymmetric Namespace Access Log Page: Not Supported 00:10:06.266 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:06.266 Command Effects Log Page: Supported 00:10:06.266 Get Log Page Extended Data: Supported 00:10:06.266 Telemetry Log Pages: Not Supported 00:10:06.266 Persistent Event Log Pages: Not Supported 00:10:06.266 Supported Log Pages Log Page: May Support 00:10:06.266 Commands Supported & Effects Log Page: Not Supported 00:10:06.266 Feature Identifiers & Effects Log Page:May Support 00:10:06.266 NVMe-MI Commands & Effects Log Page: May Support 00:10:06.266 Data Area 4 for Telemetry Log: Not Supported 00:10:06.266 Error Log Page Entries Supported: 1 00:10:06.266 Keep Alive: Not Supported 00:10:06.266 00:10:06.266 NVM Command Set Attributes 00:10:06.266 ========================== 00:10:06.266 Submission Queue Entry Size 00:10:06.266 Max: 64 00:10:06.266 Min: 64 00:10:06.266 Completion Queue Entry Size 00:10:06.266 Max: 16 00:10:06.266 Min: 16 00:10:06.266 Number of Namespaces: 256 00:10:06.266 Compare Command: Supported 00:10:06.266 Write Uncorrectable Command: Not Supported 00:10:06.266 Dataset Management Command: Supported 00:10:06.266 Write Zeroes Command: Supported 00:10:06.266 Set Features Save Field: Supported 00:10:06.266 Reservations: Not Supported 00:10:06.266 Timestamp: Supported 00:10:06.266 Copy: Supported 00:10:06.266 Volatile Write Cache: Present 00:10:06.266 Atomic Write Unit (Normal): 1 00:10:06.266 Atomic Write Unit (PFail): 1 00:10:06.266 Atomic Compare & Write Unit: 1 00:10:06.266 Fused Compare & Write: Not Supported 00:10:06.266 Scatter-Gather List 00:10:06.266 SGL Command Set: Supported 00:10:06.266 SGL Keyed: Not Supported 00:10:06.266 SGL Bit Bucket Descriptor: Not Supported 00:10:06.266 SGL Metadata Pointer: Not Supported 00:10:06.266 Oversized SGL: Not Supported 00:10:06.266 SGL Metadata Address: Not Supported 00:10:06.266 SGL Offset: Not Supported 00:10:06.266 Transport SGL Data Block: Not Supported 00:10:06.266 Replay Protected Memory Block: Not Supported 00:10:06.266 00:10:06.266 Firmware Slot Information 00:10:06.266 ========================= 00:10:06.266 Active slot: 1 00:10:06.266 Slot 1 Firmware Revision: 1.0 00:10:06.266 00:10:06.266 00:10:06.266 Commands Supported and Effects 00:10:06.266 ============================== 00:10:06.266 Admin Commands 00:10:06.266 -------------- 00:10:06.266 Delete I/O Submission Queue (00h): Supported 
00:10:06.266 Create I/O Submission Queue (01h): Supported 00:10:06.266 Get Log Page (02h): Supported 00:10:06.266 Delete I/O Completion Queue (04h): Supported 00:10:06.266 Create I/O Completion Queue (05h): Supported 00:10:06.266 Identify (06h): Supported 00:10:06.266 Abort (08h): Supported 00:10:06.266 Set Features (09h): Supported 00:10:06.266 Get Features (0Ah): Supported 00:10:06.266 Asynchronous Event Request (0Ch): Supported 00:10:06.266 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:06.266 Directive Send (19h): Supported 00:10:06.266 Directive Receive (1Ah): Supported 00:10:06.266 Virtualization Management (1Ch): Supported 00:10:06.266 Doorbell Buffer Config (7Ch): Supported 00:10:06.266 Format NVM (80h): Supported LBA-Change 00:10:06.266 I/O Commands 00:10:06.266 ------------ 00:10:06.266 Flush (00h): Supported LBA-Change 00:10:06.266 Write (01h): Supported LBA-Change 00:10:06.266 Read (02h): Supported 00:10:06.266 Compare (05h): Supported 00:10:06.266 Write Zeroes (08h): Supported LBA-Change 00:10:06.266 Dataset Management (09h): Supported LBA-Change 00:10:06.266 Unknown (0Ch): Supported 00:10:06.266 Unknown (12h): Supported 00:10:06.266 Copy (19h): Supported LBA-Change 00:10:06.266 Unknown (1Dh): Supported LBA-Change 00:10:06.266 00:10:06.266 Error Log 00:10:06.266 ========= 00:10:06.266 00:10:06.266 Arbitration 00:10:06.266 =========== 00:10:06.266 Arbitration Burst: no limit 00:10:06.266 00:10:06.266 Power Management 00:10:06.266 ================ 00:10:06.266 Number of Power States: 1 00:10:06.266 Current Power State: Power State #0 00:10:06.266 Power State #0: 00:10:06.266 Max Power: 25.00 W 00:10:06.266 Non-Operational State: Operational 00:10:06.266 Entry Latency: 16 microseconds 00:10:06.266 Exit Latency: 4 microseconds 00:10:06.266 Relative Read Throughput: 0 00:10:06.266 Relative Read Latency: 0 00:10:06.266 Relative Write Throughput: 0 00:10:06.266 Relative Write Latency: 0 00:10:06.266 Idle Power: Not Reported [2024-11-22 08:30:41.220890] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64078 terminated unexpected 00:10:06.266 Active Power: Not Reported 00:10:06.266 Non-Operational Permissive Mode: Not Supported 00:10:06.266 00:10:06.266 Health Information 00:10:06.266 ================== 00:10:06.266 Critical Warnings: 00:10:06.266 Available Spare Space: OK 00:10:06.266 Temperature: OK 00:10:06.266 Device Reliability: OK 00:10:06.266 Read Only: No 00:10:06.266 Volatile Memory Backup: OK 00:10:06.266 Current Temperature: 323 Kelvin (50 Celsius) 00:10:06.266 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:06.266 Available Spare: 0% 00:10:06.266 Available Spare Threshold: 0% 00:10:06.266 Life Percentage Used: 0% 00:10:06.266 Data Units Read: 737 00:10:06.266 Data Units Written: 665 00:10:06.266 Host Read Commands: 33073 00:10:06.266 Host Write Commands: 32859 00:10:06.266 Controller Busy Time: 0 minutes 00:10:06.266 Power Cycles: 0 00:10:06.266 Power On Hours: 0 hours 00:10:06.266 Unsafe Shutdowns: 0 00:10:06.266 Unrecoverable Media Errors: 0 00:10:06.266 Lifetime Error Log Entries: 0 00:10:06.266 Warning Temperature Time: 0 minutes 00:10:06.266 Critical Temperature Time: 0 minutes 00:10:06.266 00:10:06.266 Number of Queues 00:10:06.266 ================ 00:10:06.266 Number of I/O Submission Queues: 64 00:10:06.266 Number of I/O Completion Queues: 64 00:10:06.266 00:10:06.266 ZNS Specific Controller Data 00:10:06.266 ============================ 00:10:06.266 Zone Append Size Limit: 0 00:10:06.266
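
Because spdk_nvme_identify emits plain text like the dump above, test scripts usually scrape it with grep or awk rather than parse anything structured. A hedged sketch of pulling two fields from a captured dump; the file name is hypothetical, and the field spellings assume output captured directly from the tool, without this log's timestamp prefixes:

    # Extract the in-use LBA format index and the current temperature from a
    # saved identify dump (hypothetical capture file: identify.txt).
    dump=identify.txt
    lbaf=$(awk -F'#' '/Current LBA Format:/ {print $2; exit}' "$dump")
    temp=$(awk -F': ' '/^Current Temperature:/ {print $2; exit}' "$dump")
    echo "active LBA format: #$lbaf, temperature: $temp"

For the namespace dumped just below, which reports Current LBA Format: LBA Format #07, lbaf comes back as 07: 4096-byte data blocks with 64 bytes of metadata, per its format table.
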
00:10:06.266 00:10:06.266 Active Namespaces 00:10:06.266 ================= 00:10:06.266 Namespace ID:1 00:10:06.266 Error Recovery Timeout: Unlimited 00:10:06.266 Command Set Identifier: NVM (00h) 00:10:06.266 Deallocate: Supported 00:10:06.266 Deallocated/Unwritten Error: Supported 00:10:06.266 Deallocated Read Value: All 0x00 00:10:06.266 Deallocate in Write Zeroes: Not Supported 00:10:06.266 Deallocated Guard Field: 0xFFFF 00:10:06.266 Flush: Supported 00:10:06.266 Reservation: Not Supported 00:10:06.266 Metadata Transferred as: Separate Metadata Buffer 00:10:06.266 Namespace Sharing Capabilities: Private 00:10:06.266 Size (in LBAs): 1548666 (5GiB) 00:10:06.266 Capacity (in LBAs): 1548666 (5GiB) 00:10:06.266 Utilization (in LBAs): 1548666 (5GiB) 00:10:06.266 Thin Provisioning: Not Supported 00:10:06.266 Per-NS Atomic Units: No 00:10:06.266 Maximum Single Source Range Length: 128 00:10:06.266 Maximum Copy Length: 128 00:10:06.266 Maximum Source Range Count: 128 00:10:06.266 NGUID/EUI64 Never Reused: No 00:10:06.266 Namespace Write Protected: No 00:10:06.266 Number of LBA Formats: 8 00:10:06.266 Current LBA Format: LBA Format #07 00:10:06.266 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:06.266 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:06.266 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:06.266 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:06.266 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:06.266 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:06.266 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:06.266 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:06.266 00:10:06.267 NVM Specific Namespace Data 00:10:06.267 =========================== 00:10:06.267 Logical Block Storage Tag Mask: 0 00:10:06.267 Protection Information Capabilities: 00:10:06.267 16b Guard Protection Information Storage Tag Support: No 00:10:06.267 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:06.267 Storage Tag Check Read Support: No 00:10:06.267 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.267 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.267 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.267 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.267 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.267 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.267 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.267 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.267 ===================================================== 00:10:06.267 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:06.267 ===================================================== 00:10:06.267 Controller Capabilities/Features 00:10:06.267 ================================ 00:10:06.267 Vendor ID: 1b36 00:10:06.267 Subsystem Vendor ID: 1af4 00:10:06.267 Serial Number: 12341 00:10:06.267 Model Number: QEMU NVMe Ctrl 00:10:06.267 Firmware Version: 8.0.0 00:10:06.267 Recommended Arb Burst: 6 00:10:06.267 IEEE OUI Identifier: 00 54 52 00:10:06.267 Multi-path I/O 00:10:06.267 May have multiple subsystem ports: No 00:10:06.267 May have multiple controllers: No 
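
The -i 0 invocation above attaches to every controller in shared-memory id group 0, which is why four dumps print back to back (the 12341 controller's begins here). To inspect a single device, the identify example can instead be pointed at one PCI address with a transport ID string; a hedged sketch, with the flag assumed from the tool's customary interface rather than anything shown in this log:

    # Dump only the controller at 0000:00:11.0 (address taken from this run).
    identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    "$identify" -r 'trtype:PCIe traddr:0000:00:11.0'
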
00:10:06.267 Associated with SR-IOV VF: No 00:10:06.267 Max Data Transfer Size: 524288 00:10:06.267 Max Number of Namespaces: 256 00:10:06.267 Max Number of I/O Queues: 64 00:10:06.267 NVMe Specification Version (VS): 1.4 00:10:06.267 NVMe Specification Version (Identify): 1.4 00:10:06.267 Maximum Queue Entries: 2048 00:10:06.267 Contiguous Queues Required: Yes 00:10:06.267 Arbitration Mechanisms Supported 00:10:06.267 Weighted Round Robin: Not Supported 00:10:06.267 Vendor Specific: Not Supported 00:10:06.267 Reset Timeout: 7500 ms 00:10:06.267 Doorbell Stride: 4 bytes 00:10:06.267 NVM Subsystem Reset: Not Supported 00:10:06.267 Command Sets Supported 00:10:06.267 NVM Command Set: Supported 00:10:06.267 Boot Partition: Not Supported 00:10:06.267 Memory Page Size Minimum: 4096 bytes 00:10:06.267 Memory Page Size Maximum: 65536 bytes 00:10:06.267 Persistent Memory Region: Not Supported 00:10:06.267 Optional Asynchronous Events Supported 00:10:06.267 Namespace Attribute Notices: Supported 00:10:06.267 Firmware Activation Notices: Not Supported 00:10:06.267 ANA Change Notices: Not Supported 00:10:06.267 PLE Aggregate Log Change Notices: Not Supported 00:10:06.267 LBA Status Info Alert Notices: Not Supported 00:10:06.267 EGE Aggregate Log Change Notices: Not Supported 00:10:06.267 Normal NVM Subsystem Shutdown event: Not Supported 00:10:06.267 Zone Descriptor Change Notices: Not Supported 00:10:06.267 Discovery Log Change Notices: Not Supported 00:10:06.267 Controller Attributes 00:10:06.267 128-bit Host Identifier: Not Supported 00:10:06.267 Non-Operational Permissive Mode: Not Supported 00:10:06.267 NVM Sets: Not Supported 00:10:06.267 Read Recovery Levels: Not Supported 00:10:06.267 Endurance Groups: Not Supported 00:10:06.267 Predictable Latency Mode: Not Supported 00:10:06.267 Traffic Based Keep ALive: Not Supported 00:10:06.267 Namespace Granularity: Not Supported 00:10:06.267 SQ Associations: Not Supported 00:10:06.267 UUID List: Not Supported 00:10:06.267 Multi-Domain Subsystem: Not Supported 00:10:06.267 Fixed Capacity Management: Not Supported 00:10:06.267 Variable Capacity Management: Not Supported 00:10:06.267 Delete Endurance Group: Not Supported 00:10:06.267 Delete NVM Set: Not Supported 00:10:06.267 Extended LBA Formats Supported: Supported 00:10:06.267 Flexible Data Placement Supported: Not Supported 00:10:06.267 00:10:06.267 Controller Memory Buffer Support 00:10:06.267 ================================ 00:10:06.267 Supported: No 00:10:06.267 00:10:06.267 Persistent Memory Region Support 00:10:06.267 ================================ 00:10:06.267 Supported: No 00:10:06.267 00:10:06.267 Admin Command Set Attributes 00:10:06.267 ============================ 00:10:06.267 Security Send/Receive: Not Supported 00:10:06.267 Format NVM: Supported 00:10:06.267 Firmware Activate/Download: Not Supported 00:10:06.267 Namespace Management: Supported 00:10:06.267 Device Self-Test: Not Supported 00:10:06.267 Directives: Supported 00:10:06.267 NVMe-MI: Not Supported 00:10:06.267 Virtualization Management: Not Supported 00:10:06.267 Doorbell Buffer Config: Supported 00:10:06.267 Get LBA Status Capability: Not Supported 00:10:06.267 Command & Feature Lockdown Capability: Not Supported 00:10:06.267 Abort Command Limit: 4 00:10:06.267 Async Event Request Limit: 4 00:10:06.267 Number of Firmware Slots: N/A 00:10:06.267 Firmware Slot 1 Read-Only: N/A 00:10:06.267 Firmware Activation Without Reset: N/A 00:10:06.267 Multiple Update Detection Support: N/A 00:10:06.267 Firmware Update Granularity: No 
Information Provided 00:10:06.267 Per-Namespace SMART Log: Yes 00:10:06.267 Asymmetric Namespace Access Log Page: Not Supported 00:10:06.267 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:06.267 Command Effects Log Page: Supported 00:10:06.267 Get Log Page Extended Data: Supported 00:10:06.267 Telemetry Log Pages: Not Supported 00:10:06.267 Persistent Event Log Pages: Not Supported 00:10:06.267 Supported Log Pages Log Page: May Support 00:10:06.267 Commands Supported & Effects Log Page: Not Supported 00:10:06.267 Feature Identifiers & Effects Log Page:May Support 00:10:06.267 NVMe-MI Commands & Effects Log Page: May Support 00:10:06.267 Data Area 4 for Telemetry Log: Not Supported 00:10:06.267 Error Log Page Entries Supported: 1 00:10:06.267 Keep Alive: Not Supported 00:10:06.267 00:10:06.267 NVM Command Set Attributes 00:10:06.267 ========================== 00:10:06.267 Submission Queue Entry Size 00:10:06.267 Max: 64 00:10:06.267 Min: 64 00:10:06.267 Completion Queue Entry Size 00:10:06.267 Max: 16 00:10:06.267 Min: 16 00:10:06.267 Number of Namespaces: 256 00:10:06.267 Compare Command: Supported 00:10:06.267 Write Uncorrectable Command: Not Supported 00:10:06.267 Dataset Management Command: Supported 00:10:06.267 Write Zeroes Command: Supported 00:10:06.267 Set Features Save Field: Supported 00:10:06.267 Reservations: Not Supported 00:10:06.267 Timestamp: Supported 00:10:06.267 Copy: Supported 00:10:06.267 Volatile Write Cache: Present 00:10:06.267 Atomic Write Unit (Normal): 1 00:10:06.267 Atomic Write Unit (PFail): 1 00:10:06.267 Atomic Compare & Write Unit: 1 00:10:06.267 Fused Compare & Write: Not Supported 00:10:06.267 Scatter-Gather List 00:10:06.267 SGL Command Set: Supported 00:10:06.267 SGL Keyed: Not Supported 00:10:06.267 SGL Bit Bucket Descriptor: Not Supported 00:10:06.267 SGL Metadata Pointer: Not Supported 00:10:06.267 Oversized SGL: Not Supported 00:10:06.267 SGL Metadata Address: Not Supported 00:10:06.267 SGL Offset: Not Supported 00:10:06.267 Transport SGL Data Block: Not Supported 00:10:06.267 Replay Protected Memory Block: Not Supported 00:10:06.267 00:10:06.267 Firmware Slot Information 00:10:06.267 ========================= 00:10:06.267 Active slot: 1 00:10:06.267 Slot 1 Firmware Revision: 1.0 00:10:06.267 00:10:06.267 00:10:06.267 Commands Supported and Effects 00:10:06.267 ============================== 00:10:06.267 Admin Commands 00:10:06.267 -------------- 00:10:06.267 Delete I/O Submission Queue (00h): Supported 00:10:06.267 Create I/O Submission Queue (01h): Supported 00:10:06.267 Get Log Page (02h): Supported 00:10:06.267 Delete I/O Completion Queue (04h): Supported 00:10:06.267 Create I/O Completion Queue (05h): Supported 00:10:06.267 Identify (06h): Supported 00:10:06.267 Abort (08h): Supported 00:10:06.267 Set Features (09h): Supported 00:10:06.267 Get Features (0Ah): Supported 00:10:06.267 Asynchronous Event Request (0Ch): Supported 00:10:06.267 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:06.267 Directive Send (19h): Supported 00:10:06.267 Directive Receive (1Ah): Supported 00:10:06.267 Virtualization Management (1Ch): Supported 00:10:06.267 Doorbell Buffer Config (7Ch): Supported 00:10:06.267 Format NVM (80h): Supported LBA-Change 00:10:06.267 I/O Commands 00:10:06.267 ------------ 00:10:06.267 Flush (00h): Supported LBA-Change 00:10:06.267 Write (01h): Supported LBA-Change 00:10:06.267 Read (02h): Supported 00:10:06.267 Compare (05h): Supported 00:10:06.267 Write Zeroes (08h): Supported LBA-Change 00:10:06.267 Dataset Management 
(09h): Supported LBA-Change 00:10:06.267 Unknown (0Ch): Supported 00:10:06.267 Unknown (12h): Supported 00:10:06.267 Copy (19h): Supported LBA-Change 00:10:06.267 Unknown (1Dh): Supported LBA-Change 00:10:06.267 00:10:06.267 Error Log 00:10:06.268 ========= 00:10:06.268 00:10:06.268 Arbitration 00:10:06.268 =========== 00:10:06.268 Arbitration Burst: no limit 00:10:06.268 00:10:06.268 Power Management 00:10:06.268 ================ 00:10:06.268 Number of Power States: 1 00:10:06.268 Current Power State: Power State #0 00:10:06.268 Power State #0: 00:10:06.268 Max Power: 25.00 W 00:10:06.268 Non-Operational State: Operational 00:10:06.268 Entry Latency: 16 microseconds 00:10:06.268 Exit Latency: 4 microseconds 00:10:06.268 Relative Read Throughput: 0 00:10:06.268 Relative Read Latency: 0 00:10:06.268 Relative Write Throughput: 0 00:10:06.268 Relative Write Latency: 0 00:10:06.268 Idle Power: Not Reported 00:10:06.268 Active Power: Not Reported 00:10:06.268 Non-Operational Permissive Mode: Not Supported 00:10:06.268 00:10:06.268 Health Information 00:10:06.268 ================== 00:10:06.268 Critical Warnings: 00:10:06.268 Available Spare Space: OK 00:10:06.268 Temperature: OK 00:10:06.268 Device Reliability: OK 00:10:06.268 Read Only: No 00:10:06.268 Volatile Memory Backup: OK 00:10:06.268 Current Temperature: 323 Kelvin (50 Celsius) 00:10:06.268 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:06.268 Available Spare: 0% 00:10:06.268 Available Spare Threshold: 0% 00:10:06.268 Life Percentage Used: 0% 00:10:06.268 Data Units Read: 1151 00:10:06.268 Data Units Written: 1018 00:10:06.268 Host Read Commands: 49729 00:10:06.268 Host Write Commands: 48522 00:10:06.268 Controller Busy Time: 0 minutes 00:10:06.268 Power Cycles: 0 00:10:06.268 Power On Hours: 0 hours 00:10:06.268 Unsafe Shutdowns: 0 00:10:06.268 Unrecoverable Media Errors: 0 00:10:06.268 Lifetime Error Log Entries: 0 00:10:06.268 Warning Temperature Time: 0 minutes 00:10:06.268 Critical Temperature Time: 0 minutes 00:10:06.268 00:10:06.268 Number of Queues 00:10:06.268 ================ 00:10:06.268 Number of I/O Submission Queues: 64 00:10:06.268 Number of I/O Completion Queues: 64 00:10:06.268 00:10:06.268 ZNS Specific Controller Data 00:10:06.268 ============================ 00:10:06.268 Zone Append Size Limit: 0 00:10:06.268 00:10:06.268 00:10:06.268 Active Namespaces 00:10:06.268 ================= 00:10:06.268 Namespace ID:1 00:10:06.268 Error Recovery Timeout: Unlimited 00:10:06.268 Command Set Identifier: NVM (00h) 00:10:06.268 Deallocate: Supported 00:10:06.268 Deallocated/Unwritten Error: Supported 00:10:06.268 Deallocated Read Value: All 0x00 00:10:06.268 Deallocate in Write Zeroes: Not Supported 00:10:06.268 Deallocated Guard Field: 0xFFFF 00:10:06.268 Flush: Supported 00:10:06.268 Reservation: Not Supported 00:10:06.268 Namespace Sharing Capabilities: Private 00:10:06.268 Size (in LBAs): 1310720 (5GiB) 00:10:06.268 Capacity (in LBAs): 1310720 (5GiB) 00:10:06.268 Utilization (in LBAs): 1310720 (5GiB) 00:10:06.268 Thin Provisioning: Not Supported 00:10:06.268 Per-NS Atomic Units: No 00:10:06.268 Maximum Single Source Range Length: 128 00:10:06.268 Maximum Copy Length: 128 00:10:06.268 Maximum Source Range Count: 128 00:10:06.268 NGUID/EUI64 Never Reused: No 00:10:06.268 Namespace Write Protected: No 00:10:06.268 Number of LBA Formats: 8 00:10:06.268 Current LBA Format: LBA Format #04 00:10:06.268 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:06.268 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:06.268 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:10:06.268 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:06.268 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:06.268 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:06.268 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:06.268 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:06.268 00:10:06.268 NVM Specific Namespace Data 00:10:06.268 =========================== 00:10:06.268 Logical Block Storage Tag Mask: 0 00:10:06.268 Protection Information Capabilities: 00:10:06.268 16b Guard Protection Information Storage Tag Support: No 00:10:06.268 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:06.268 Storage Tag Check Read Support: No 00:10:06.268 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.268 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.268 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.268 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.268 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.268 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.268 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.268 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.268 ===================================================== 00:10:06.268 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:06.268 ===================================================== 00:10:06.268 Controller Capabilities/Features 00:10:06.268 ================================ 00:10:06.268 Vendor ID: 1b36 00:10:06.268 Subsystem Vendor ID: 1af4 00:10:06.268 Serial Number: 12343 00:10:06.268 Model Number: QEMU NVMe Ctrl 00:10:06.268 Firmware Version: 8.0.0 00:10:06.268 Recommended Arb Burst: 6 00:10:06.268 IEEE OUI Identifier: 00 54 52 00:10:06.268 Multi-path I/O 00:10:06.268 May have multiple subsystem ports: No 00:10:06.268 May have multiple controllers: Yes 00:10:06.268 Associated with SR-IOV VF: No 00:10:06.268 Max Data Transfer Size: 524288 00:10:06.268 Max Number of Namespaces: 256 00:10:06.268 Max Number of I/O Queues: 64 00:10:06.268 NVMe Specification Version (VS): 1.4 00:10:06.268 NVMe Specification Version (Identify): 1.4 00:10:06.268 Maximum Queue Entries: 2048 00:10:06.268 Contiguous Queues Required: Yes 00:10:06.268 Arbitration Mechanisms Supported 00:10:06.268 Weighted Round Robin: Not Supported 00:10:06.268 Vendor Specific: Not Supported 00:10:06.268 Reset Timeout: 7500 ms 00:10:06.268 Doorbell Stride: 4 bytes 00:10:06.268 NVM Subsystem Reset: Not Supported 00:10:06.268 Command Sets Supported 00:10:06.268 NVM Command Set: Supported 00:10:06.268 Boot Partition: Not Supported 00:10:06.268 Memory Page Size Minimum: 4096 bytes 00:10:06.268 Memory Page Size Maximum: 65536 bytes 00:10:06.268 Persistent Memory Region: Not Supported 00:10:06.268 Optional Asynchronous Events Supported 00:10:06.268 Namespace Attribute Notices: Supported 00:10:06.268 Firmware Activation Notices: Not Supported 00:10:06.268 ANA Change Notices: Not Supported 00:10:06.268 PLE Aggregate Log Change Notices: Not Supported 00:10:06.268 LBA Status Info Alert Notices: Not Supported 00:10:06.268 EGE Aggregate Log Change Notices: Not Supported 
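
This 12343 controller (subsystem nqn.2019-08.org.qemu:fdp-subsys3) is the one backing the SPDK_TEST_NVME_FDP=1 setting from the job config: unlike its siblings, its dump below reports Endurance Groups: Supported and Flexible Data Placement Supported, along with FDP configuration, usage, and statistics log pages. A hedged sketch of gating a test on that capability by grepping the dump (match string as printed below; the transport-ID flag is the same assumption as in the previous sketch):

    # Run FDP cases only when the target controller actually advertises FDP.
    identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    if "$identify" -r 'trtype:PCIe traddr:0000:00:13.0' |
            grep -q 'Flexible Data Placement Supported: Supported'; then
        echo "0000:00:13.0 is FDP-capable; running FDP tests"
    else
        echo "FDP not advertised; skipping" >&2
    fi
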
00:10:06.268 Normal NVM Subsystem Shutdown event: Not Supported 00:10:06.268 Zone Descriptor Change Notices: Not Supported 00:10:06.268 Discovery Log Change Notices: Not Supported 00:10:06.268 Controller Attributes 00:10:06.268 128-bit Host Identifier: Not Supported 00:10:06.268 Non-Operational Permissive Mode: Not Supported 00:10:06.268 NVM Sets: Not Supported 00:10:06.268 Read Recovery Levels: Not Supported 00:10:06.268 Endurance Groups: Supported 00:10:06.268 Predictable Latency Mode: Not Supported 00:10:06.268 Traffic Based Keep ALive: Not Supported 00:10:06.268 Namespace Granularity: Not Supported 00:10:06.268 SQ Associations: Not Supported 00:10:06.268 UUID List: Not Supported 00:10:06.268 Multi-Domain Subsystem: Not Supported 00:10:06.268 Fixed Capacity Management: Not Supported 00:10:06.268 Variable Capacity Management: Not Supported 00:10:06.268 Delete Endurance Group: Not Supported 00:10:06.268 Delete NVM Set: Not Supported 00:10:06.268 Extended LBA Formats Supported: Supported 00:10:06.268 Flexible Data Placement Supported: Supported 00:10:06.268 00:10:06.268 Controller Memory Buffer Support 00:10:06.268 ================================ 00:10:06.268 Supported: No 00:10:06.268 00:10:06.268 Persistent Memory Region Support 00:10:06.268 ================================ 00:10:06.268 Supported: No 00:10:06.268 00:10:06.268 Admin Command Set Attributes 00:10:06.268 ============================ 00:10:06.268 Security Send/Receive: Not Supported 00:10:06.268 Format NVM: Supported 00:10:06.268 Firmware Activate/Download: Not Supported 00:10:06.268 Namespace Management: Supported 00:10:06.268 Device Self-Test: Not Supported 00:10:06.268 Directives: Supported 00:10:06.268 NVMe-MI: Not Supported 00:10:06.268 Virtualization Management: Not Supported 00:10:06.268 Doorbell Buffer Config: Supported 00:10:06.268 Get LBA Status Capability: Not Supported 00:10:06.268 Command & Feature Lockdown Capability: Not Supported 00:10:06.268 Abort Command Limit: 4 00:10:06.268 Async Event Request Limit: 4 00:10:06.268 Number of Firmware Slots: N/A 00:10:06.268 Firmware Slot 1 Read-Only: N/A 00:10:06.269 Firmware Activation Without Reset: N/A 00:10:06.269 Multiple Update Detection Support: N/A 00:10:06.269 Firmware Update Granularity: No Information Provided 00:10:06.269 Per-Namespace SMART Log: Yes 00:10:06.269 Asymmetric Namespace Access Log Page: Not Supported 00:10:06.269 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:06.269 Command Effects Log Page: Supported 00:10:06.269 Get Log Page Extended Data: Supported 00:10:06.269 Telemetry Log Pages: Not Supported 00:10:06.269 Persistent Event Log Pages: Not Supported 00:10:06.269 Supported Log Pages Log Page: May Support 00:10:06.269 Commands Supported & Effects Log Page: Not Supported 00:10:06.269 Feature Identifiers & Effects Log Page:May Support 00:10:06.269 NVMe-MI Commands & Effects Log Page: May Support 00:10:06.269 Data Area 4 for Telemetry Log: Not Supported 00:10:06.269 Error Log Page Entries Supported: 1 00:10:06.269 Keep Alive: Not Supported 00:10:06.269 00:10:06.269 NVM Command Set Attributes 00:10:06.269 ========================== 00:10:06.269 Submission Queue Entry Size 00:10:06.269 Max: 64 00:10:06.269 Min: 64 00:10:06.269 Completion Queue Entry Size 00:10:06.269 Max: 16 00:10:06.269 Min: 16 00:10:06.269 Number of Namespaces: 256 00:10:06.269 Compare Command: Supported 00:10:06.269 Write Uncorrectable Command: Not Supported 00:10:06.269 Dataset Management Command: Supported 00:10:06.269 Write Zeroes Command: Supported 00:10:06.269 Set 
Features Save Field: Supported 00:10:06.269 Reservations: Not Supported 00:10:06.269 Timestamp: Supported 00:10:06.269 Copy: Supported 00:10:06.269 Volatile Write Cache: Present 00:10:06.269 Atomic Write Unit (Normal): 1 00:10:06.269 Atomic Write Unit (PFail): 1 00:10:06.269 Atomic Compare & Write Unit: 1 00:10:06.269 Fused Compare & Write: Not Supported 00:10:06.269 Scatter-Gather List 00:10:06.269 SGL Command Set: Supported 00:10:06.269 SGL Keyed: Not Supported 00:10:06.269 SGL Bit Bucket Descriptor: Not Supported 00:10:06.269 SGL Metadata Pointer: Not Supported 00:10:06.269 Oversized SGL: Not Supported 00:10:06.269 SGL Metadata Address: Not Supported 00:10:06.269 SGL Offset: Not Supported 00:10:06.269 Transport SGL Data Block: Not Supported 00:10:06.269 Replay Protected Memory Block: Not Supported 00:10:06.269 00:10:06.269 Firmware Slot Information 00:10:06.269 ========================= 00:10:06.269 Active slot: 1 00:10:06.269 Slot 1 Firmware Revision: 1.0 00:10:06.269 00:10:06.269 00:10:06.269 Commands Supported and Effects 00:10:06.269 ============================== 00:10:06.269 Admin Commands 00:10:06.269 -------------- 00:10:06.269 Delete I/O Submission Queue (00h): Supported 00:10:06.269 Create I/O Submission Queue (01h): Supported 00:10:06.269 Get Log Page (02h): Supported 00:10:06.269 Delete I/O Completion Queue (04h): Supported 00:10:06.269 Create I/O Completion Queue (05h): Supported 00:10:06.269 Identify (06h): Supported 00:10:06.269 Abort (08h): Supported 00:10:06.269 Set Features (09h): Supported 00:10:06.269 Get Features (0Ah): Supported 00:10:06.269 Asynchronous Event Request (0Ch): Supported 00:10:06.269 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:06.269 Directive Send (19h): Supported 00:10:06.269 Directive Receive (1Ah): Supported 00:10:06.269 Virtualization Management (1Ch): Supported 00:10:06.269 Doorbell Buffer Config (7Ch): Supported 00:10:06.269 Format NVM (80h): Supported LBA-Change 00:10:06.269 I/O Commands 00:10:06.269 ------------ 00:10:06.269 Flush (00h): Supported LBA-Change 00:10:06.269 Write (01h): Supported LBA-Change 00:10:06.269 Read (02h): Supported 00:10:06.269 Compare (05h): Supported 00:10:06.269 Write Zeroes (08h): Supported LBA-Change 00:10:06.269 Dataset Management (09h): Supported LBA-Change 00:10:06.269 Unknown (0Ch): Supported 00:10:06.269 Unknown (12h): Supported 00:10:06.269 Copy (19h): Supported LBA-Change 00:10:06.269 Unknown (1Dh): Supported LBA-Change 00:10:06.269 00:10:06.269 Error Log 00:10:06.269 ========= 00:10:06.269 00:10:06.269 Arbitration 00:10:06.269 =========== 00:10:06.269 Arbitration Burst: no limit 00:10:06.269 00:10:06.269 Power Management 00:10:06.269 ================ 00:10:06.269 Number of Power States: 1 00:10:06.269 Current Power State: Power State #0 00:10:06.269 Power State #0: 00:10:06.269 Max Power: 25.00 W 00:10:06.269 Non-Operational State: Operational 00:10:06.269 Entry Latency: 16 microseconds 00:10:06.269 Exit Latency: 4 microseconds 00:10:06.269 Relative Read Throughput: 0 00:10:06.269 Relative Read Latency: 0 00:10:06.269 Relative Write Throughput: 0 00:10:06.269 Relative Write Latency: 0 00:10:06.269 Idle Power: Not Reported 00:10:06.269 Active Power: Not Reported 00:10:06.269 Non-Operational Permissive Mode: Not Supported 00:10:06.269 00:10:06.269 Health Information 00:10:06.269 ================== 00:10:06.269 Critical Warnings: 00:10:06.269 Available Spare Space: OK 00:10:06.269 Temperature: OK 00:10:06.269 Device Reliability: OK 00:10:06.269 Read Only: No 00:10:06.269 Volatile Memory 
Backup: OK 00:10:06.269 Current Temperature: 323 Kelvin (50 Celsius) 00:10:06.269 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:06.269 Available Spare: 0% 00:10:06.269 Available Spare Threshold: 0% 00:10:06.269 Life Percentage Used: 0% [2024-11-22 08:30:41.221844] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64078 terminated unexpected 00:10:06.269 Data Units Read: 968 00:10:06.269 Data Units Written: 897 00:10:06.269 Host Read Commands: 35301 00:10:06.269 Host Write Commands: 34724 00:10:06.269 Controller Busy Time: 0 minutes 00:10:06.269 Power Cycles: 0 00:10:06.269 Power On Hours: 0 hours 00:10:06.269 Unsafe Shutdowns: 0 00:10:06.269 Unrecoverable Media Errors: 0 00:10:06.269 Lifetime Error Log Entries: 0 00:10:06.269 Warning Temperature Time: 0 minutes 00:10:06.269 Critical Temperature Time: 0 minutes 00:10:06.269 00:10:06.269 Number of Queues 00:10:06.269 ================ 00:10:06.269 Number of I/O Submission Queues: 64 00:10:06.269 Number of I/O Completion Queues: 64 00:10:06.269 00:10:06.269 ZNS Specific Controller Data 00:10:06.269 ============================ 00:10:06.269 Zone Append Size Limit: 0 00:10:06.269 00:10:06.269 00:10:06.269 Active Namespaces 00:10:06.269 ================= 00:10:06.269 Namespace ID:1 00:10:06.269 Error Recovery Timeout: Unlimited 00:10:06.269 Command Set Identifier: NVM (00h) 00:10:06.269 Deallocate: Supported 00:10:06.269 Deallocated/Unwritten Error: Supported 00:10:06.269 Deallocated Read Value: All 0x00 00:10:06.269 Deallocate in Write Zeroes: Not Supported 00:10:06.269 Deallocated Guard Field: 0xFFFF 00:10:06.269 Flush: Supported 00:10:06.269 Reservation: Not Supported 00:10:06.269 Namespace Sharing Capabilities: Multiple Controllers 00:10:06.269 Size (in LBAs): 262144 (1GiB) 00:10:06.269 Capacity (in LBAs): 262144 (1GiB) 00:10:06.269 Utilization (in LBAs): 262144 (1GiB) 00:10:06.269 Thin Provisioning: Not Supported 00:10:06.269 Per-NS Atomic Units: No 00:10:06.269 Maximum Single Source Range Length: 128 00:10:06.269 Maximum Copy Length: 128 00:10:06.269 Maximum Source Range Count: 128 00:10:06.269 NGUID/EUI64 Never Reused: No 00:10:06.269 Namespace Write Protected: No 00:10:06.269 Endurance group ID: 1 00:10:06.269 Number of LBA Formats: 8 00:10:06.269 Current LBA Format: LBA Format #04 00:10:06.269 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:06.269 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:06.269 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:06.269 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:06.269 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:06.269 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:06.269 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:06.269 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:06.269 00:10:06.269 Get Feature FDP: 00:10:06.269 ================ 00:10:06.269 Enabled: Yes 00:10:06.269 FDP configuration index: 0 00:10:06.269 00:10:06.269 FDP configurations log page 00:10:06.269 =========================== 00:10:06.269 Number of FDP configurations: 1 00:10:06.269 Version: 0 00:10:06.269 Size: 112 00:10:06.269 FDP Configuration Descriptor: 0 00:10:06.269 Descriptor Size: 96 00:10:06.269 Reclaim Group Identifier format: 2 00:10:06.269 FDP Volatile Write Cache: Not Present 00:10:06.269 FDP Configuration: Valid 00:10:06.269 Vendor Specific Size: 0 00:10:06.269 Number of Reclaim Groups: 2 00:10:06.269 Number of Reclaim Unit Handles: 8 00:10:06.269 Max Placement Identifiers: 128 00:10:06.269
Number of Namespaces Supported: 256 00:10:06.269 Reclaim unit Nominal Size: 6000000 bytes 00:10:06.269 Estimated Reclaim Unit Time Limit: Not Reported 00:10:06.269 RUH Desc #000: RUH Type: Initially Isolated 00:10:06.270 RUH Desc #001: RUH Type: Initially Isolated 00:10:06.270 RUH Desc #002: RUH Type: Initially Isolated 00:10:06.270 RUH Desc #003: RUH Type: Initially Isolated 00:10:06.270 RUH Desc #004: RUH Type: Initially Isolated 00:10:06.270 RUH Desc #005: RUH Type: Initially Isolated 00:10:06.270 RUH Desc #006: RUH Type: Initially Isolated 00:10:06.270 RUH Desc #007: RUH Type: Initially Isolated 00:10:06.270 00:10:06.270 FDP reclaim unit handle usage log page 00:10:06.270 ====================================== 00:10:06.270 Number of Reclaim Unit Handles: 8 00:10:06.270 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:06.270 RUH Usage Desc #001: RUH Attributes: Unused 00:10:06.270 RUH Usage Desc #002: RUH Attributes: Unused 00:10:06.270 RUH Usage Desc #003: RUH Attributes: Unused 00:10:06.270 RUH Usage Desc #004: RUH Attributes: Unused 00:10:06.270 RUH Usage Desc #005: RUH Attributes: Unused 00:10:06.270 RUH Usage Desc #006: RUH Attributes: Unused 00:10:06.270 RUH Usage Desc #007: RUH Attributes: Unused 00:10:06.270 00:10:06.270 FDP statistics log page 00:10:06.270 ======================= 00:10:06.270 Host bytes with metadata written: 578396160 00:10:06.270 Media bytes with metadata written: 578473984 [2024-11-22 08:30:41.223616] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64078 terminated unexpected 00:10:06.270 Media bytes erased: 0 00:10:06.270 00:10:06.270 FDP events log page 00:10:06.270 =================== 00:10:06.270 Number of FDP events: 0 00:10:06.270 00:10:06.270 NVM Specific Namespace Data 00:10:06.270 =========================== 00:10:06.270 Logical Block Storage Tag Mask: 0 00:10:06.270 Protection Information Capabilities: 00:10:06.270 16b Guard Protection Information Storage Tag Support: No 00:10:06.270 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:06.270 Storage Tag Check Read Support: No 00:10:06.270 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.270 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.270 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.270 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.270 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.270 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.270 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.270 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.270 ===================================================== 00:10:06.270 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:06.270 ===================================================== 00:10:06.270 Controller Capabilities/Features 00:10:06.270 ================================ 00:10:06.270 Vendor ID: 1b36 00:10:06.270 Subsystem Vendor ID: 1af4 00:10:06.270 Serial Number: 12342 00:10:06.270 Model Number: QEMU NVMe Ctrl 00:10:06.270 Firmware Version: 8.0.0 00:10:06.270 Recommended Arb Burst: 6 00:10:06.270 IEEE OUI Identifier: 00 54 52 00:10:06.270 Multi-path I/O
00:10:06.270 May have multiple subsystem ports: No 00:10:06.270 May have multiple controllers: No 00:10:06.270 Associated with SR-IOV VF: No 00:10:06.270 Max Data Transfer Size: 524288 00:10:06.270 Max Number of Namespaces: 256 00:10:06.270 Max Number of I/O Queues: 64 00:10:06.270 NVMe Specification Version (VS): 1.4 00:10:06.270 NVMe Specification Version (Identify): 1.4 00:10:06.270 Maximum Queue Entries: 2048 00:10:06.270 Contiguous Queues Required: Yes 00:10:06.270 Arbitration Mechanisms Supported 00:10:06.270 Weighted Round Robin: Not Supported 00:10:06.270 Vendor Specific: Not Supported 00:10:06.270 Reset Timeout: 7500 ms 00:10:06.270 Doorbell Stride: 4 bytes 00:10:06.270 NVM Subsystem Reset: Not Supported 00:10:06.270 Command Sets Supported 00:10:06.270 NVM Command Set: Supported 00:10:06.270 Boot Partition: Not Supported 00:10:06.270 Memory Page Size Minimum: 4096 bytes 00:10:06.270 Memory Page Size Maximum: 65536 bytes 00:10:06.270 Persistent Memory Region: Not Supported 00:10:06.270 Optional Asynchronous Events Supported 00:10:06.270 Namespace Attribute Notices: Supported 00:10:06.270 Firmware Activation Notices: Not Supported 00:10:06.270 ANA Change Notices: Not Supported 00:10:06.270 PLE Aggregate Log Change Notices: Not Supported 00:10:06.270 LBA Status Info Alert Notices: Not Supported 00:10:06.270 EGE Aggregate Log Change Notices: Not Supported 00:10:06.270 Normal NVM Subsystem Shutdown event: Not Supported 00:10:06.270 Zone Descriptor Change Notices: Not Supported 00:10:06.270 Discovery Log Change Notices: Not Supported 00:10:06.270 Controller Attributes 00:10:06.270 128-bit Host Identifier: Not Supported 00:10:06.270 Non-Operational Permissive Mode: Not Supported 00:10:06.270 NVM Sets: Not Supported 00:10:06.270 Read Recovery Levels: Not Supported 00:10:06.270 Endurance Groups: Not Supported 00:10:06.270 Predictable Latency Mode: Not Supported 00:10:06.270 Traffic Based Keep ALive: Not Supported 00:10:06.270 Namespace Granularity: Not Supported 00:10:06.270 SQ Associations: Not Supported 00:10:06.270 UUID List: Not Supported 00:10:06.270 Multi-Domain Subsystem: Not Supported 00:10:06.270 Fixed Capacity Management: Not Supported 00:10:06.270 Variable Capacity Management: Not Supported 00:10:06.270 Delete Endurance Group: Not Supported 00:10:06.270 Delete NVM Set: Not Supported 00:10:06.270 Extended LBA Formats Supported: Supported 00:10:06.270 Flexible Data Placement Supported: Not Supported 00:10:06.270 00:10:06.270 Controller Memory Buffer Support 00:10:06.270 ================================ 00:10:06.270 Supported: No 00:10:06.270 00:10:06.270 Persistent Memory Region Support 00:10:06.270 ================================ 00:10:06.270 Supported: No 00:10:06.270 00:10:06.270 Admin Command Set Attributes 00:10:06.270 ============================ 00:10:06.270 Security Send/Receive: Not Supported 00:10:06.270 Format NVM: Supported 00:10:06.270 Firmware Activate/Download: Not Supported 00:10:06.270 Namespace Management: Supported 00:10:06.270 Device Self-Test: Not Supported 00:10:06.270 Directives: Supported 00:10:06.270 NVMe-MI: Not Supported 00:10:06.270 Virtualization Management: Not Supported 00:10:06.270 Doorbell Buffer Config: Supported 00:10:06.270 Get LBA Status Capability: Not Supported 00:10:06.270 Command & Feature Lockdown Capability: Not Supported 00:10:06.270 Abort Command Limit: 4 00:10:06.270 Async Event Request Limit: 4 00:10:06.270 Number of Firmware Slots: N/A 00:10:06.270 Firmware Slot 1 Read-Only: N/A 00:10:06.270 Firmware Activation Without Reset: N/A 
00:10:06.270 Multiple Update Detection Support: N/A 00:10:06.270 Firmware Update Granularity: No Information Provided 00:10:06.270 Per-Namespace SMART Log: Yes 00:10:06.270 Asymmetric Namespace Access Log Page: Not Supported 00:10:06.270 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:06.270 Command Effects Log Page: Supported 00:10:06.270 Get Log Page Extended Data: Supported 00:10:06.270 Telemetry Log Pages: Not Supported 00:10:06.270 Persistent Event Log Pages: Not Supported 00:10:06.270 Supported Log Pages Log Page: May Support 00:10:06.270 Commands Supported & Effects Log Page: Not Supported 00:10:06.270 Feature Identifiers & Effects Log Page:May Support 00:10:06.270 NVMe-MI Commands & Effects Log Page: May Support 00:10:06.270 Data Area 4 for Telemetry Log: Not Supported 00:10:06.270 Error Log Page Entries Supported: 1 00:10:06.270 Keep Alive: Not Supported 00:10:06.270 00:10:06.270 NVM Command Set Attributes 00:10:06.270 ========================== 00:10:06.270 Submission Queue Entry Size 00:10:06.270 Max: 64 00:10:06.270 Min: 64 00:10:06.271 Completion Queue Entry Size 00:10:06.271 Max: 16 00:10:06.271 Min: 16 00:10:06.271 Number of Namespaces: 256 00:10:06.271 Compare Command: Supported 00:10:06.271 Write Uncorrectable Command: Not Supported 00:10:06.271 Dataset Management Command: Supported 00:10:06.271 Write Zeroes Command: Supported 00:10:06.271 Set Features Save Field: Supported 00:10:06.271 Reservations: Not Supported 00:10:06.271 Timestamp: Supported 00:10:06.271 Copy: Supported 00:10:06.271 Volatile Write Cache: Present 00:10:06.271 Atomic Write Unit (Normal): 1 00:10:06.271 Atomic Write Unit (PFail): 1 00:10:06.271 Atomic Compare & Write Unit: 1 00:10:06.271 Fused Compare & Write: Not Supported 00:10:06.271 Scatter-Gather List 00:10:06.271 SGL Command Set: Supported 00:10:06.271 SGL Keyed: Not Supported 00:10:06.271 SGL Bit Bucket Descriptor: Not Supported 00:10:06.271 SGL Metadata Pointer: Not Supported 00:10:06.271 Oversized SGL: Not Supported 00:10:06.271 SGL Metadata Address: Not Supported 00:10:06.271 SGL Offset: Not Supported 00:10:06.271 Transport SGL Data Block: Not Supported 00:10:06.271 Replay Protected Memory Block: Not Supported 00:10:06.271 00:10:06.271 Firmware Slot Information 00:10:06.271 ========================= 00:10:06.271 Active slot: 1 00:10:06.271 Slot 1 Firmware Revision: 1.0 00:10:06.271 00:10:06.271 00:10:06.271 Commands Supported and Effects 00:10:06.271 ============================== 00:10:06.271 Admin Commands 00:10:06.271 -------------- 00:10:06.271 Delete I/O Submission Queue (00h): Supported 00:10:06.271 Create I/O Submission Queue (01h): Supported 00:10:06.271 Get Log Page (02h): Supported 00:10:06.271 Delete I/O Completion Queue (04h): Supported 00:10:06.271 Create I/O Completion Queue (05h): Supported 00:10:06.271 Identify (06h): Supported 00:10:06.271 Abort (08h): Supported 00:10:06.271 Set Features (09h): Supported 00:10:06.271 Get Features (0Ah): Supported 00:10:06.271 Asynchronous Event Request (0Ch): Supported 00:10:06.271 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:06.271 Directive Send (19h): Supported 00:10:06.271 Directive Receive (1Ah): Supported 00:10:06.271 Virtualization Management (1Ch): Supported 00:10:06.271 Doorbell Buffer Config (7Ch): Supported 00:10:06.271 Format NVM (80h): Supported LBA-Change 00:10:06.271 I/O Commands 00:10:06.271 ------------ 00:10:06.271 Flush (00h): Supported LBA-Change 00:10:06.271 Write (01h): Supported LBA-Change 00:10:06.271 Read (02h): Supported 00:10:06.271 Compare (05h): 
Supported 00:10:06.271 Write Zeroes (08h): Supported LBA-Change 00:10:06.271 Dataset Management (09h): Supported LBA-Change 00:10:06.271 Unknown (0Ch): Supported 00:10:06.271 Unknown (12h): Supported 00:10:06.271 Copy (19h): Supported LBA-Change 00:10:06.271 Unknown (1Dh): Supported LBA-Change 00:10:06.271 00:10:06.271 Error Log 00:10:06.271 ========= 00:10:06.271 00:10:06.271 Arbitration 00:10:06.271 =========== 00:10:06.271 Arbitration Burst: no limit 00:10:06.271 00:10:06.271 Power Management 00:10:06.271 ================ 00:10:06.271 Number of Power States: 1 00:10:06.271 Current Power State: Power State #0 00:10:06.271 Power State #0: 00:10:06.271 Max Power: 25.00 W 00:10:06.271 Non-Operational State: Operational 00:10:06.271 Entry Latency: 16 microseconds 00:10:06.271 Exit Latency: 4 microseconds 00:10:06.271 Relative Read Throughput: 0 00:10:06.271 Relative Read Latency: 0 00:10:06.271 Relative Write Throughput: 0 00:10:06.271 Relative Write Latency: 0 00:10:06.271 Idle Power: Not Reported 00:10:06.271 Active Power: Not Reported 00:10:06.271 Non-Operational Permissive Mode: Not Supported 00:10:06.271 00:10:06.271 Health Information 00:10:06.271 ================== 00:10:06.271 Critical Warnings: 00:10:06.271 Available Spare Space: OK 00:10:06.271 Temperature: OK 00:10:06.271 Device Reliability: OK 00:10:06.271 Read Only: No 00:10:06.271 Volatile Memory Backup: OK 00:10:06.271 Current Temperature: 323 Kelvin (50 Celsius) 00:10:06.271 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:06.271 Available Spare: 0% 00:10:06.271 Available Spare Threshold: 0% 00:10:06.271 Life Percentage Used: 0% 00:10:06.271 Data Units Read: 2452 00:10:06.271 Data Units Written: 2239 00:10:06.271 Host Read Commands: 102376 00:10:06.271 Host Write Commands: 100646 00:10:06.271 Controller Busy Time: 0 minutes 00:10:06.271 Power Cycles: 0 00:10:06.271 Power On Hours: 0 hours 00:10:06.271 Unsafe Shutdowns: 0 00:10:06.271 Unrecoverable Media Errors: 0 00:10:06.271 Lifetime Error Log Entries: 0 00:10:06.271 Warning Temperature Time: 0 minutes 00:10:06.271 Critical Temperature Time: 0 minutes 00:10:06.271 00:10:06.271 Number of Queues 00:10:06.271 ================ 00:10:06.271 Number of I/O Submission Queues: 64 00:10:06.271 Number of I/O Completion Queues: 64 00:10:06.271 00:10:06.271 ZNS Specific Controller Data 00:10:06.271 ============================ 00:10:06.271 Zone Append Size Limit: 0 00:10:06.271 00:10:06.271 00:10:06.271 Active Namespaces 00:10:06.271 ================= 00:10:06.271 Namespace ID:1 00:10:06.271 Error Recovery Timeout: Unlimited 00:10:06.271 Command Set Identifier: NVM (00h) 00:10:06.271 Deallocate: Supported 00:10:06.271 Deallocated/Unwritten Error: Supported 00:10:06.271 Deallocated Read Value: All 0x00 00:10:06.271 Deallocate in Write Zeroes: Not Supported 00:10:06.271 Deallocated Guard Field: 0xFFFF 00:10:06.271 Flush: Supported 00:10:06.271 Reservation: Not Supported 00:10:06.271 Namespace Sharing Capabilities: Private 00:10:06.271 Size (in LBAs): 1048576 (4GiB) 00:10:06.271 Capacity (in LBAs): 1048576 (4GiB) 00:10:06.271 Utilization (in LBAs): 1048576 (4GiB) 00:10:06.271 Thin Provisioning: Not Supported 00:10:06.271 Per-NS Atomic Units: No 00:10:06.271 Maximum Single Source Range Length: 128 00:10:06.271 Maximum Copy Length: 128 00:10:06.271 Maximum Source Range Count: 128 00:10:06.271 NGUID/EUI64 Never Reused: No 00:10:06.271 Namespace Write Protected: No 00:10:06.271 Number of LBA Formats: 8 00:10:06.271 Current LBA Format: LBA Format #04 00:10:06.271 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:10:06.271 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:06.271 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:06.271 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:06.271 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:06.271 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:06.271 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:06.271 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:06.271 00:10:06.271 NVM Specific Namespace Data 00:10:06.271 =========================== 00:10:06.271 Logical Block Storage Tag Mask: 0 00:10:06.271 Protection Information Capabilities: 00:10:06.271 16b Guard Protection Information Storage Tag Support: No 00:10:06.271 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:06.271 Storage Tag Check Read Support: No 00:10:06.271 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.271 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.271 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.271 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.271 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.271 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.271 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.271 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.271 Namespace ID:2 00:10:06.271 Error Recovery Timeout: Unlimited 00:10:06.271 Command Set Identifier: NVM (00h) 00:10:06.271 Deallocate: Supported 00:10:06.271 Deallocated/Unwritten Error: Supported 00:10:06.271 Deallocated Read Value: All 0x00 00:10:06.271 Deallocate in Write Zeroes: Not Supported 00:10:06.271 Deallocated Guard Field: 0xFFFF 00:10:06.271 Flush: Supported 00:10:06.271 Reservation: Not Supported 00:10:06.271 Namespace Sharing Capabilities: Private 00:10:06.271 Size (in LBAs): 1048576 (4GiB) 00:10:06.271 Capacity (in LBAs): 1048576 (4GiB) 00:10:06.271 Utilization (in LBAs): 1048576 (4GiB) 00:10:06.271 Thin Provisioning: Not Supported 00:10:06.271 Per-NS Atomic Units: No 00:10:06.271 Maximum Single Source Range Length: 128 00:10:06.271 Maximum Copy Length: 128 00:10:06.271 Maximum Source Range Count: 128 00:10:06.271 NGUID/EUI64 Never Reused: No 00:10:06.271 Namespace Write Protected: No 00:10:06.271 Number of LBA Formats: 8 00:10:06.271 Current LBA Format: LBA Format #04 00:10:06.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:06.271 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:06.271 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:06.271 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:06.271 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:06.272 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:06.272 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:06.272 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:06.272 00:10:06.272 NVM Specific Namespace Data 00:10:06.272 =========================== 00:10:06.272 Logical Block Storage Tag Mask: 0 00:10:06.272 Protection Information Capabilities: 00:10:06.272 16b Guard Protection Information Storage Tag Support: No 00:10:06.272 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:10:06.272 Storage Tag Check Read Support: No 00:10:06.272 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Namespace ID:3 00:10:06.272 Error Recovery Timeout: Unlimited 00:10:06.272 Command Set Identifier: NVM (00h) 00:10:06.272 Deallocate: Supported 00:10:06.272 Deallocated/Unwritten Error: Supported 00:10:06.272 Deallocated Read Value: All 0x00 00:10:06.272 Deallocate in Write Zeroes: Not Supported 00:10:06.272 Deallocated Guard Field: 0xFFFF 00:10:06.272 Flush: Supported 00:10:06.272 Reservation: Not Supported 00:10:06.272 Namespace Sharing Capabilities: Private 00:10:06.272 Size (in LBAs): 1048576 (4GiB) 00:10:06.272 Capacity (in LBAs): 1048576 (4GiB) 00:10:06.272 Utilization (in LBAs): 1048576 (4GiB) 00:10:06.272 Thin Provisioning: Not Supported 00:10:06.272 Per-NS Atomic Units: No 00:10:06.272 Maximum Single Source Range Length: 128 00:10:06.272 Maximum Copy Length: 128 00:10:06.272 Maximum Source Range Count: 128 00:10:06.272 NGUID/EUI64 Never Reused: No 00:10:06.272 Namespace Write Protected: No 00:10:06.272 Number of LBA Formats: 8 00:10:06.272 Current LBA Format: LBA Format #04 00:10:06.272 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:06.272 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:06.272 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:06.272 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:06.272 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:06.272 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:06.272 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:06.272 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:06.272 00:10:06.272 NVM Specific Namespace Data 00:10:06.272 =========================== 00:10:06.272 Logical Block Storage Tag Mask: 0 00:10:06.272 Protection Information Capabilities: 00:10:06.272 16b Guard Protection Information Storage Tag Support: No 00:10:06.272 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:06.272 Storage Tag Check Read Support: No 00:10:06.272 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.272 08:30:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:06.272 08:30:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:06.532 ===================================================== 00:10:06.532 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:06.532 ===================================================== 00:10:06.532 Controller Capabilities/Features 00:10:06.532 ================================ 00:10:06.532 Vendor ID: 1b36 00:10:06.532 Subsystem Vendor ID: 1af4 00:10:06.532 Serial Number: 12340 00:10:06.532 Model Number: QEMU NVMe Ctrl 00:10:06.532 Firmware Version: 8.0.0 00:10:06.532 Recommended Arb Burst: 6 00:10:06.533 IEEE OUI Identifier: 00 54 52 00:10:06.533 Multi-path I/O 00:10:06.533 May have multiple subsystem ports: No 00:10:06.533 May have multiple controllers: No 00:10:06.533 Associated with SR-IOV VF: No 00:10:06.533 Max Data Transfer Size: 524288 00:10:06.533 Max Number of Namespaces: 256 00:10:06.533 Max Number of I/O Queues: 64 00:10:06.533 NVMe Specification Version (VS): 1.4 00:10:06.533 NVMe Specification Version (Identify): 1.4 00:10:06.533 Maximum Queue Entries: 2048 00:10:06.533 Contiguous Queues Required: Yes 00:10:06.533 Arbitration Mechanisms Supported 00:10:06.533 Weighted Round Robin: Not Supported 00:10:06.533 Vendor Specific: Not Supported 00:10:06.533 Reset Timeout: 7500 ms 00:10:06.533 Doorbell Stride: 4 bytes 00:10:06.533 NVM Subsystem Reset: Not Supported 00:10:06.533 Command Sets Supported 00:10:06.533 NVM Command Set: Supported 00:10:06.533 Boot Partition: Not Supported 00:10:06.533 Memory Page Size Minimum: 4096 bytes 00:10:06.533 Memory Page Size Maximum: 65536 bytes 00:10:06.533 Persistent Memory Region: Not Supported 00:10:06.533 Optional Asynchronous Events Supported 00:10:06.533 Namespace Attribute Notices: Supported 00:10:06.533 Firmware Activation Notices: Not Supported 00:10:06.533 ANA Change Notices: Not Supported 00:10:06.533 PLE Aggregate Log Change Notices: Not Supported 00:10:06.533 LBA Status Info Alert Notices: Not Supported 00:10:06.533 EGE Aggregate Log Change Notices: Not Supported 00:10:06.533 Normal NVM Subsystem Shutdown event: Not Supported 00:10:06.533 Zone Descriptor Change Notices: Not Supported 00:10:06.533 Discovery Log Change Notices: Not Supported 00:10:06.533 Controller Attributes 00:10:06.533 128-bit Host Identifier: Not Supported 00:10:06.533 Non-Operational Permissive Mode: Not Supported 00:10:06.533 NVM Sets: Not Supported 00:10:06.533 Read Recovery Levels: Not Supported 00:10:06.533 Endurance Groups: Not Supported 00:10:06.533 Predictable Latency Mode: Not Supported 00:10:06.533 Traffic Based Keep ALive: Not Supported 00:10:06.533 Namespace Granularity: Not Supported 00:10:06.533 SQ Associations: Not Supported 00:10:06.533 UUID List: Not Supported 00:10:06.533 Multi-Domain Subsystem: Not Supported 00:10:06.533 Fixed Capacity Management: Not Supported 00:10:06.533 Variable Capacity Management: Not Supported 00:10:06.533 Delete Endurance Group: Not Supported 00:10:06.533 Delete NVM Set: Not Supported 00:10:06.533 Extended LBA Formats Supported: Supported 00:10:06.533 Flexible Data Placement Supported: Not Supported 00:10:06.533 00:10:06.533 Controller Memory Buffer Support 00:10:06.533 ================================ 00:10:06.533 Supported: No 00:10:06.533 00:10:06.533 Persistent Memory Region Support 00:10:06.533 
================================ 00:10:06.533 Supported: No 00:10:06.533 00:10:06.533 Admin Command Set Attributes 00:10:06.533 ============================ 00:10:06.533 Security Send/Receive: Not Supported 00:10:06.533 Format NVM: Supported 00:10:06.533 Firmware Activate/Download: Not Supported 00:10:06.533 Namespace Management: Supported 00:10:06.533 Device Self-Test: Not Supported 00:10:06.533 Directives: Supported 00:10:06.533 NVMe-MI: Not Supported 00:10:06.533 Virtualization Management: Not Supported 00:10:06.533 Doorbell Buffer Config: Supported 00:10:06.533 Get LBA Status Capability: Not Supported 00:10:06.533 Command & Feature Lockdown Capability: Not Supported 00:10:06.533 Abort Command Limit: 4 00:10:06.533 Async Event Request Limit: 4 00:10:06.533 Number of Firmware Slots: N/A 00:10:06.533 Firmware Slot 1 Read-Only: N/A 00:10:06.533 Firmware Activation Without Reset: N/A 00:10:06.533 Multiple Update Detection Support: N/A 00:10:06.533 Firmware Update Granularity: No Information Provided 00:10:06.533 Per-Namespace SMART Log: Yes 00:10:06.533 Asymmetric Namespace Access Log Page: Not Supported 00:10:06.533 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:06.533 Command Effects Log Page: Supported 00:10:06.533 Get Log Page Extended Data: Supported 00:10:06.533 Telemetry Log Pages: Not Supported 00:10:06.533 Persistent Event Log Pages: Not Supported 00:10:06.533 Supported Log Pages Log Page: May Support 00:10:06.533 Commands Supported & Effects Log Page: Not Supported 00:10:06.533 Feature Identifiers & Effects Log Page:May Support 00:10:06.533 NVMe-MI Commands & Effects Log Page: May Support 00:10:06.533 Data Area 4 for Telemetry Log: Not Supported 00:10:06.533 Error Log Page Entries Supported: 1 00:10:06.533 Keep Alive: Not Supported 00:10:06.533 00:10:06.533 NVM Command Set Attributes 00:10:06.533 ========================== 00:10:06.533 Submission Queue Entry Size 00:10:06.533 Max: 64 00:10:06.533 Min: 64 00:10:06.533 Completion Queue Entry Size 00:10:06.533 Max: 16 00:10:06.533 Min: 16 00:10:06.533 Number of Namespaces: 256 00:10:06.533 Compare Command: Supported 00:10:06.533 Write Uncorrectable Command: Not Supported 00:10:06.533 Dataset Management Command: Supported 00:10:06.533 Write Zeroes Command: Supported 00:10:06.533 Set Features Save Field: Supported 00:10:06.533 Reservations: Not Supported 00:10:06.533 Timestamp: Supported 00:10:06.533 Copy: Supported 00:10:06.533 Volatile Write Cache: Present 00:10:06.533 Atomic Write Unit (Normal): 1 00:10:06.533 Atomic Write Unit (PFail): 1 00:10:06.533 Atomic Compare & Write Unit: 1 00:10:06.533 Fused Compare & Write: Not Supported 00:10:06.533 Scatter-Gather List 00:10:06.533 SGL Command Set: Supported 00:10:06.533 SGL Keyed: Not Supported 00:10:06.533 SGL Bit Bucket Descriptor: Not Supported 00:10:06.533 SGL Metadata Pointer: Not Supported 00:10:06.533 Oversized SGL: Not Supported 00:10:06.533 SGL Metadata Address: Not Supported 00:10:06.533 SGL Offset: Not Supported 00:10:06.533 Transport SGL Data Block: Not Supported 00:10:06.533 Replay Protected Memory Block: Not Supported 00:10:06.533 00:10:06.533 Firmware Slot Information 00:10:06.533 ========================= 00:10:06.533 Active slot: 1 00:10:06.533 Slot 1 Firmware Revision: 1.0 00:10:06.533 00:10:06.533 00:10:06.533 Commands Supported and Effects 00:10:06.533 ============================== 00:10:06.533 Admin Commands 00:10:06.533 -------------- 00:10:06.533 Delete I/O Submission Queue (00h): Supported 00:10:06.533 Create I/O Submission Queue (01h): Supported 00:10:06.533 
Get Log Page (02h): Supported 00:10:06.533 Delete I/O Completion Queue (04h): Supported 00:10:06.533 Create I/O Completion Queue (05h): Supported 00:10:06.533 Identify (06h): Supported 00:10:06.533 Abort (08h): Supported 00:10:06.533 Set Features (09h): Supported 00:10:06.533 Get Features (0Ah): Supported 00:10:06.533 Asynchronous Event Request (0Ch): Supported 00:10:06.533 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:06.533 Directive Send (19h): Supported 00:10:06.533 Directive Receive (1Ah): Supported 00:10:06.533 Virtualization Management (1Ch): Supported 00:10:06.533 Doorbell Buffer Config (7Ch): Supported 00:10:06.533 Format NVM (80h): Supported LBA-Change 00:10:06.533 I/O Commands 00:10:06.533 ------------ 00:10:06.533 Flush (00h): Supported LBA-Change 00:10:06.533 Write (01h): Supported LBA-Change 00:10:06.533 Read (02h): Supported 00:10:06.533 Compare (05h): Supported 00:10:06.533 Write Zeroes (08h): Supported LBA-Change 00:10:06.533 Dataset Management (09h): Supported LBA-Change 00:10:06.533 Unknown (0Ch): Supported 00:10:06.533 Unknown (12h): Supported 00:10:06.533 Copy (19h): Supported LBA-Change 00:10:06.533 Unknown (1Dh): Supported LBA-Change 00:10:06.533 00:10:06.533 Error Log 00:10:06.533 ========= 00:10:06.533 00:10:06.533 Arbitration 00:10:06.533 =========== 00:10:06.533 Arbitration Burst: no limit 00:10:06.533 00:10:06.533 Power Management 00:10:06.533 ================ 00:10:06.533 Number of Power States: 1 00:10:06.533 Current Power State: Power State #0 00:10:06.533 Power State #0: 00:10:06.533 Max Power: 25.00 W 00:10:06.533 Non-Operational State: Operational 00:10:06.533 Entry Latency: 16 microseconds 00:10:06.533 Exit Latency: 4 microseconds 00:10:06.533 Relative Read Throughput: 0 00:10:06.533 Relative Read Latency: 0 00:10:06.533 Relative Write Throughput: 0 00:10:06.533 Relative Write Latency: 0 00:10:06.533 Idle Power: Not Reported 00:10:06.533 Active Power: Not Reported 00:10:06.533 Non-Operational Permissive Mode: Not Supported 00:10:06.533 00:10:06.533 Health Information 00:10:06.533 ================== 00:10:06.533 Critical Warnings: 00:10:06.533 Available Spare Space: OK 00:10:06.533 Temperature: OK 00:10:06.533 Device Reliability: OK 00:10:06.533 Read Only: No 00:10:06.533 Volatile Memory Backup: OK 00:10:06.533 Current Temperature: 323 Kelvin (50 Celsius) 00:10:06.533 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:06.533 Available Spare: 0% 00:10:06.534 Available Spare Threshold: 0% 00:10:06.534 Life Percentage Used: 0% 00:10:06.534 Data Units Read: 737 00:10:06.534 Data Units Written: 665 00:10:06.534 Host Read Commands: 33073 00:10:06.534 Host Write Commands: 32859 00:10:06.534 Controller Busy Time: 0 minutes 00:10:06.534 Power Cycles: 0 00:10:06.534 Power On Hours: 0 hours 00:10:06.534 Unsafe Shutdowns: 0 00:10:06.534 Unrecoverable Media Errors: 0 00:10:06.534 Lifetime Error Log Entries: 0 00:10:06.534 Warning Temperature Time: 0 minutes 00:10:06.534 Critical Temperature Time: 0 minutes 00:10:06.534 00:10:06.534 Number of Queues 00:10:06.534 ================ 00:10:06.534 Number of I/O Submission Queues: 64 00:10:06.534 Number of I/O Completion Queues: 64 00:10:06.534 00:10:06.534 ZNS Specific Controller Data 00:10:06.534 ============================ 00:10:06.534 Zone Append Size Limit: 0 00:10:06.534 00:10:06.534 00:10:06.534 Active Namespaces 00:10:06.534 ================= 00:10:06.534 Namespace ID:1 00:10:06.534 Error Recovery Timeout: Unlimited 00:10:06.534 Command Set Identifier: NVM (00h) 00:10:06.534 Deallocate: Supported 
00:10:06.534 Deallocated/Unwritten Error: Supported 00:10:06.534 Deallocated Read Value: All 0x00 00:10:06.534 Deallocate in Write Zeroes: Not Supported 00:10:06.534 Deallocated Guard Field: 0xFFFF 00:10:06.534 Flush: Supported 00:10:06.534 Reservation: Not Supported 00:10:06.534 Metadata Transferred as: Separate Metadata Buffer 00:10:06.534 Namespace Sharing Capabilities: Private 00:10:06.534 Size (in LBAs): 1548666 (5GiB) 00:10:06.534 Capacity (in LBAs): 1548666 (5GiB) 00:10:06.534 Utilization (in LBAs): 1548666 (5GiB) 00:10:06.534 Thin Provisioning: Not Supported 00:10:06.534 Per-NS Atomic Units: No 00:10:06.534 Maximum Single Source Range Length: 128 00:10:06.534 Maximum Copy Length: 128 00:10:06.534 Maximum Source Range Count: 128 00:10:06.534 NGUID/EUI64 Never Reused: No 00:10:06.534 Namespace Write Protected: No 00:10:06.534 Number of LBA Formats: 8 00:10:06.534 Current LBA Format: LBA Format #07 00:10:06.534 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:06.534 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:06.534 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:06.534 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:06.534 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:06.534 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:06.534 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:06.534 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:06.534 00:10:06.534 NVM Specific Namespace Data 00:10:06.534 =========================== 00:10:06.534 Logical Block Storage Tag Mask: 0 00:10:06.534 Protection Information Capabilities: 00:10:06.534 16b Guard Protection Information Storage Tag Support: No 00:10:06.534 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:06.534 Storage Tag Check Read Support: No 00:10:06.534 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.534 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.534 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.534 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.534 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.534 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.534 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.534 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:06.793 08:30:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:06.793 08:30:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:07.054 ===================================================== 00:10:07.054 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:07.054 ===================================================== 00:10:07.054 Controller Capabilities/Features 00:10:07.054 ================================ 00:10:07.054 Vendor ID: 1b36 00:10:07.054 Subsystem Vendor ID: 1af4 00:10:07.054 Serial Number: 12341 00:10:07.054 Model Number: QEMU NVMe Ctrl 00:10:07.054 Firmware Version: 8.0.0 00:10:07.054 Recommended Arb Burst: 6 00:10:07.054 IEEE OUI Identifier: 00 54 52 00:10:07.054 Multi-path I/O 00:10:07.054 May have multiple subsystem ports: No 00:10:07.054 May have multiple 
controllers: No 00:10:07.054 Associated with SR-IOV VF: No 00:10:07.054 Max Data Transfer Size: 524288 00:10:07.054 Max Number of Namespaces: 256 00:10:07.054 Max Number of I/O Queues: 64 00:10:07.054 NVMe Specification Version (VS): 1.4 00:10:07.054 NVMe Specification Version (Identify): 1.4 00:10:07.054 Maximum Queue Entries: 2048 00:10:07.054 Contiguous Queues Required: Yes 00:10:07.054 Arbitration Mechanisms Supported 00:10:07.054 Weighted Round Robin: Not Supported 00:10:07.054 Vendor Specific: Not Supported 00:10:07.054 Reset Timeout: 7500 ms 00:10:07.054 Doorbell Stride: 4 bytes 00:10:07.054 NVM Subsystem Reset: Not Supported 00:10:07.054 Command Sets Supported 00:10:07.054 NVM Command Set: Supported 00:10:07.054 Boot Partition: Not Supported 00:10:07.054 Memory Page Size Minimum: 4096 bytes 00:10:07.054 Memory Page Size Maximum: 65536 bytes 00:10:07.054 Persistent Memory Region: Not Supported 00:10:07.054 Optional Asynchronous Events Supported 00:10:07.054 Namespace Attribute Notices: Supported 00:10:07.054 Firmware Activation Notices: Not Supported 00:10:07.054 ANA Change Notices: Not Supported 00:10:07.054 PLE Aggregate Log Change Notices: Not Supported 00:10:07.054 LBA Status Info Alert Notices: Not Supported 00:10:07.054 EGE Aggregate Log Change Notices: Not Supported 00:10:07.054 Normal NVM Subsystem Shutdown event: Not Supported 00:10:07.054 Zone Descriptor Change Notices: Not Supported 00:10:07.054 Discovery Log Change Notices: Not Supported 00:10:07.054 Controller Attributes 00:10:07.054 128-bit Host Identifier: Not Supported 00:10:07.054 Non-Operational Permissive Mode: Not Supported 00:10:07.054 NVM Sets: Not Supported 00:10:07.054 Read Recovery Levels: Not Supported 00:10:07.054 Endurance Groups: Not Supported 00:10:07.054 Predictable Latency Mode: Not Supported 00:10:07.054 Traffic Based Keep ALive: Not Supported 00:10:07.054 Namespace Granularity: Not Supported 00:10:07.054 SQ Associations: Not Supported 00:10:07.054 UUID List: Not Supported 00:10:07.054 Multi-Domain Subsystem: Not Supported 00:10:07.054 Fixed Capacity Management: Not Supported 00:10:07.054 Variable Capacity Management: Not Supported 00:10:07.054 Delete Endurance Group: Not Supported 00:10:07.054 Delete NVM Set: Not Supported 00:10:07.054 Extended LBA Formats Supported: Supported 00:10:07.054 Flexible Data Placement Supported: Not Supported 00:10:07.054 00:10:07.054 Controller Memory Buffer Support 00:10:07.054 ================================ 00:10:07.054 Supported: No 00:10:07.054 00:10:07.054 Persistent Memory Region Support 00:10:07.054 ================================ 00:10:07.054 Supported: No 00:10:07.054 00:10:07.054 Admin Command Set Attributes 00:10:07.054 ============================ 00:10:07.054 Security Send/Receive: Not Supported 00:10:07.054 Format NVM: Supported 00:10:07.055 Firmware Activate/Download: Not Supported 00:10:07.055 Namespace Management: Supported 00:10:07.055 Device Self-Test: Not Supported 00:10:07.055 Directives: Supported 00:10:07.055 NVMe-MI: Not Supported 00:10:07.055 Virtualization Management: Not Supported 00:10:07.055 Doorbell Buffer Config: Supported 00:10:07.055 Get LBA Status Capability: Not Supported 00:10:07.055 Command & Feature Lockdown Capability: Not Supported 00:10:07.055 Abort Command Limit: 4 00:10:07.055 Async Event Request Limit: 4 00:10:07.055 Number of Firmware Slots: N/A 00:10:07.055 Firmware Slot 1 Read-Only: N/A 00:10:07.055 Firmware Activation Without Reset: N/A 00:10:07.055 Multiple Update Detection Support: N/A 00:10:07.055 Firmware Update 
Granularity: No Information Provided 00:10:07.055 Per-Namespace SMART Log: Yes 00:10:07.055 Asymmetric Namespace Access Log Page: Not Supported 00:10:07.055 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:07.055 Command Effects Log Page: Supported 00:10:07.055 Get Log Page Extended Data: Supported 00:10:07.055 Telemetry Log Pages: Not Supported 00:10:07.055 Persistent Event Log Pages: Not Supported 00:10:07.055 Supported Log Pages Log Page: May Support 00:10:07.055 Commands Supported & Effects Log Page: Not Supported 00:10:07.055 Feature Identifiers & Effects Log Page:May Support 00:10:07.055 NVMe-MI Commands & Effects Log Page: May Support 00:10:07.055 Data Area 4 for Telemetry Log: Not Supported 00:10:07.055 Error Log Page Entries Supported: 1 00:10:07.055 Keep Alive: Not Supported 00:10:07.055 00:10:07.055 NVM Command Set Attributes 00:10:07.055 ========================== 00:10:07.055 Submission Queue Entry Size 00:10:07.055 Max: 64 00:10:07.055 Min: 64 00:10:07.055 Completion Queue Entry Size 00:10:07.055 Max: 16 00:10:07.055 Min: 16 00:10:07.055 Number of Namespaces: 256 00:10:07.055 Compare Command: Supported 00:10:07.055 Write Uncorrectable Command: Not Supported 00:10:07.055 Dataset Management Command: Supported 00:10:07.055 Write Zeroes Command: Supported 00:10:07.055 Set Features Save Field: Supported 00:10:07.055 Reservations: Not Supported 00:10:07.055 Timestamp: Supported 00:10:07.055 Copy: Supported 00:10:07.055 Volatile Write Cache: Present 00:10:07.055 Atomic Write Unit (Normal): 1 00:10:07.055 Atomic Write Unit (PFail): 1 00:10:07.055 Atomic Compare & Write Unit: 1 00:10:07.055 Fused Compare & Write: Not Supported 00:10:07.055 Scatter-Gather List 00:10:07.055 SGL Command Set: Supported 00:10:07.055 SGL Keyed: Not Supported 00:10:07.055 SGL Bit Bucket Descriptor: Not Supported 00:10:07.055 SGL Metadata Pointer: Not Supported 00:10:07.055 Oversized SGL: Not Supported 00:10:07.055 SGL Metadata Address: Not Supported 00:10:07.055 SGL Offset: Not Supported 00:10:07.055 Transport SGL Data Block: Not Supported 00:10:07.055 Replay Protected Memory Block: Not Supported 00:10:07.055 00:10:07.055 Firmware Slot Information 00:10:07.055 ========================= 00:10:07.055 Active slot: 1 00:10:07.055 Slot 1 Firmware Revision: 1.0 00:10:07.055 00:10:07.055 00:10:07.055 Commands Supported and Effects 00:10:07.055 ============================== 00:10:07.055 Admin Commands 00:10:07.055 -------------- 00:10:07.055 Delete I/O Submission Queue (00h): Supported 00:10:07.055 Create I/O Submission Queue (01h): Supported 00:10:07.055 Get Log Page (02h): Supported 00:10:07.055 Delete I/O Completion Queue (04h): Supported 00:10:07.055 Create I/O Completion Queue (05h): Supported 00:10:07.055 Identify (06h): Supported 00:10:07.055 Abort (08h): Supported 00:10:07.055 Set Features (09h): Supported 00:10:07.055 Get Features (0Ah): Supported 00:10:07.055 Asynchronous Event Request (0Ch): Supported 00:10:07.055 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:07.055 Directive Send (19h): Supported 00:10:07.055 Directive Receive (1Ah): Supported 00:10:07.055 Virtualization Management (1Ch): Supported 00:10:07.055 Doorbell Buffer Config (7Ch): Supported 00:10:07.055 Format NVM (80h): Supported LBA-Change 00:10:07.055 I/O Commands 00:10:07.055 ------------ 00:10:07.055 Flush (00h): Supported LBA-Change 00:10:07.055 Write (01h): Supported LBA-Change 00:10:07.055 Read (02h): Supported 00:10:07.055 Compare (05h): Supported 00:10:07.055 Write Zeroes (08h): Supported LBA-Change 00:10:07.055 
Dataset Management (09h): Supported LBA-Change 00:10:07.055 Unknown (0Ch): Supported 00:10:07.055 Unknown (12h): Supported 00:10:07.055 Copy (19h): Supported LBA-Change 00:10:07.055 Unknown (1Dh): Supported LBA-Change 00:10:07.055 00:10:07.055 Error Log 00:10:07.055 ========= 00:10:07.055 00:10:07.055 Arbitration 00:10:07.055 =========== 00:10:07.055 Arbitration Burst: no limit 00:10:07.055 00:10:07.055 Power Management 00:10:07.055 ================ 00:10:07.055 Number of Power States: 1 00:10:07.055 Current Power State: Power State #0 00:10:07.055 Power State #0: 00:10:07.055 Max Power: 25.00 W 00:10:07.055 Non-Operational State: Operational 00:10:07.055 Entry Latency: 16 microseconds 00:10:07.055 Exit Latency: 4 microseconds 00:10:07.055 Relative Read Throughput: 0 00:10:07.055 Relative Read Latency: 0 00:10:07.055 Relative Write Throughput: 0 00:10:07.055 Relative Write Latency: 0 00:10:07.055 Idle Power: Not Reported 00:10:07.055 Active Power: Not Reported 00:10:07.055 Non-Operational Permissive Mode: Not Supported 00:10:07.055 00:10:07.055 Health Information 00:10:07.055 ================== 00:10:07.055 Critical Warnings: 00:10:07.055 Available Spare Space: OK 00:10:07.055 Temperature: OK 00:10:07.055 Device Reliability: OK 00:10:07.055 Read Only: No 00:10:07.055 Volatile Memory Backup: OK 00:10:07.055 Current Temperature: 323 Kelvin (50 Celsius) 00:10:07.055 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:07.055 Available Spare: 0% 00:10:07.055 Available Spare Threshold: 0% 00:10:07.055 Life Percentage Used: 0% 00:10:07.055 Data Units Read: 1151 00:10:07.055 Data Units Written: 1018 00:10:07.055 Host Read Commands: 49729 00:10:07.055 Host Write Commands: 48522 00:10:07.055 Controller Busy Time: 0 minutes 00:10:07.055 Power Cycles: 0 00:10:07.055 Power On Hours: 0 hours 00:10:07.055 Unsafe Shutdowns: 0 00:10:07.055 Unrecoverable Media Errors: 0 00:10:07.055 Lifetime Error Log Entries: 0 00:10:07.055 Warning Temperature Time: 0 minutes 00:10:07.055 Critical Temperature Time: 0 minutes 00:10:07.055 00:10:07.055 Number of Queues 00:10:07.055 ================ 00:10:07.055 Number of I/O Submission Queues: 64 00:10:07.055 Number of I/O Completion Queues: 64 00:10:07.055 00:10:07.055 ZNS Specific Controller Data 00:10:07.055 ============================ 00:10:07.055 Zone Append Size Limit: 0 00:10:07.055 00:10:07.055 00:10:07.055 Active Namespaces 00:10:07.055 ================= 00:10:07.056 Namespace ID:1 00:10:07.056 Error Recovery Timeout: Unlimited 00:10:07.056 Command Set Identifier: NVM (00h) 00:10:07.056 Deallocate: Supported 00:10:07.056 Deallocated/Unwritten Error: Supported 00:10:07.056 Deallocated Read Value: All 0x00 00:10:07.056 Deallocate in Write Zeroes: Not Supported 00:10:07.056 Deallocated Guard Field: 0xFFFF 00:10:07.056 Flush: Supported 00:10:07.056 Reservation: Not Supported 00:10:07.056 Namespace Sharing Capabilities: Private 00:10:07.056 Size (in LBAs): 1310720 (5GiB) 00:10:07.056 Capacity (in LBAs): 1310720 (5GiB) 00:10:07.056 Utilization (in LBAs): 1310720 (5GiB) 00:10:07.056 Thin Provisioning: Not Supported 00:10:07.056 Per-NS Atomic Units: No 00:10:07.056 Maximum Single Source Range Length: 128 00:10:07.056 Maximum Copy Length: 128 00:10:07.056 Maximum Source Range Count: 128 00:10:07.056 NGUID/EUI64 Never Reused: No 00:10:07.056 Namespace Write Protected: No 00:10:07.056 Number of LBA Formats: 8 00:10:07.056 Current LBA Format: LBA Format #04 00:10:07.056 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:07.056 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:10:07.056 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:07.056 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:07.056 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:07.056 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:07.056 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:07.056 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:07.056 00:10:07.056 NVM Specific Namespace Data 00:10:07.056 =========================== 00:10:07.056 Logical Block Storage Tag Mask: 0 00:10:07.056 Protection Information Capabilities: 00:10:07.056 16b Guard Protection Information Storage Tag Support: No 00:10:07.056 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:07.056 Storage Tag Check Read Support: No 00:10:07.056 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.056 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.056 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.056 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.056 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.056 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.056 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.056 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.056 08:30:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:07.056 08:30:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:07.317 ===================================================== 00:10:07.317 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:07.317 ===================================================== 00:10:07.317 Controller Capabilities/Features 00:10:07.317 ================================ 00:10:07.317 Vendor ID: 1b36 00:10:07.317 Subsystem Vendor ID: 1af4 00:10:07.317 Serial Number: 12342 00:10:07.317 Model Number: QEMU NVMe Ctrl 00:10:07.317 Firmware Version: 8.0.0 00:10:07.317 Recommended Arb Burst: 6 00:10:07.317 IEEE OUI Identifier: 00 54 52 00:10:07.317 Multi-path I/O 00:10:07.317 May have multiple subsystem ports: No 00:10:07.317 May have multiple controllers: No 00:10:07.317 Associated with SR-IOV VF: No 00:10:07.317 Max Data Transfer Size: 524288 00:10:07.317 Max Number of Namespaces: 256 00:10:07.317 Max Number of I/O Queues: 64 00:10:07.317 NVMe Specification Version (VS): 1.4 00:10:07.317 NVMe Specification Version (Identify): 1.4 00:10:07.317 Maximum Queue Entries: 2048 00:10:07.317 Contiguous Queues Required: Yes 00:10:07.317 Arbitration Mechanisms Supported 00:10:07.317 Weighted Round Robin: Not Supported 00:10:07.317 Vendor Specific: Not Supported 00:10:07.317 Reset Timeout: 7500 ms 00:10:07.317 Doorbell Stride: 4 bytes 00:10:07.317 NVM Subsystem Reset: Not Supported 00:10:07.317 Command Sets Supported 00:10:07.317 NVM Command Set: Supported 00:10:07.317 Boot Partition: Not Supported 00:10:07.317 Memory Page Size Minimum: 4096 bytes 00:10:07.317 Memory Page Size Maximum: 65536 bytes 00:10:07.317 Persistent Memory Region: Not Supported 00:10:07.317 Optional Asynchronous Events Supported 00:10:07.317 Namespace Attribute Notices: Supported 00:10:07.317 
Firmware Activation Notices: Not Supported 00:10:07.317 ANA Change Notices: Not Supported 00:10:07.317 PLE Aggregate Log Change Notices: Not Supported 00:10:07.317 LBA Status Info Alert Notices: Not Supported 00:10:07.317 EGE Aggregate Log Change Notices: Not Supported 00:10:07.317 Normal NVM Subsystem Shutdown event: Not Supported 00:10:07.317 Zone Descriptor Change Notices: Not Supported 00:10:07.317 Discovery Log Change Notices: Not Supported 00:10:07.317 Controller Attributes 00:10:07.317 128-bit Host Identifier: Not Supported 00:10:07.317 Non-Operational Permissive Mode: Not Supported 00:10:07.317 NVM Sets: Not Supported 00:10:07.317 Read Recovery Levels: Not Supported 00:10:07.317 Endurance Groups: Not Supported 00:10:07.317 Predictable Latency Mode: Not Supported 00:10:07.317 Traffic Based Keep ALive: Not Supported 00:10:07.317 Namespace Granularity: Not Supported 00:10:07.317 SQ Associations: Not Supported 00:10:07.317 UUID List: Not Supported 00:10:07.317 Multi-Domain Subsystem: Not Supported 00:10:07.317 Fixed Capacity Management: Not Supported 00:10:07.317 Variable Capacity Management: Not Supported 00:10:07.317 Delete Endurance Group: Not Supported 00:10:07.317 Delete NVM Set: Not Supported 00:10:07.317 Extended LBA Formats Supported: Supported 00:10:07.317 Flexible Data Placement Supported: Not Supported 00:10:07.317 00:10:07.317 Controller Memory Buffer Support 00:10:07.317 ================================ 00:10:07.317 Supported: No 00:10:07.317 00:10:07.317 Persistent Memory Region Support 00:10:07.317 ================================ 00:10:07.317 Supported: No 00:10:07.317 00:10:07.317 Admin Command Set Attributes 00:10:07.317 ============================ 00:10:07.317 Security Send/Receive: Not Supported 00:10:07.317 Format NVM: Supported 00:10:07.317 Firmware Activate/Download: Not Supported 00:10:07.317 Namespace Management: Supported 00:10:07.317 Device Self-Test: Not Supported 00:10:07.317 Directives: Supported 00:10:07.317 NVMe-MI: Not Supported 00:10:07.317 Virtualization Management: Not Supported 00:10:07.317 Doorbell Buffer Config: Supported 00:10:07.317 Get LBA Status Capability: Not Supported 00:10:07.317 Command & Feature Lockdown Capability: Not Supported 00:10:07.317 Abort Command Limit: 4 00:10:07.317 Async Event Request Limit: 4 00:10:07.317 Number of Firmware Slots: N/A 00:10:07.317 Firmware Slot 1 Read-Only: N/A 00:10:07.317 Firmware Activation Without Reset: N/A 00:10:07.317 Multiple Update Detection Support: N/A 00:10:07.317 Firmware Update Granularity: No Information Provided 00:10:07.317 Per-Namespace SMART Log: Yes 00:10:07.317 Asymmetric Namespace Access Log Page: Not Supported 00:10:07.317 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:07.317 Command Effects Log Page: Supported 00:10:07.317 Get Log Page Extended Data: Supported 00:10:07.317 Telemetry Log Pages: Not Supported 00:10:07.317 Persistent Event Log Pages: Not Supported 00:10:07.317 Supported Log Pages Log Page: May Support 00:10:07.317 Commands Supported & Effects Log Page: Not Supported 00:10:07.317 Feature Identifiers & Effects Log Page:May Support 00:10:07.317 NVMe-MI Commands & Effects Log Page: May Support 00:10:07.317 Data Area 4 for Telemetry Log: Not Supported 00:10:07.317 Error Log Page Entries Supported: 1 00:10:07.317 Keep Alive: Not Supported 00:10:07.317 00:10:07.317 NVM Command Set Attributes 00:10:07.317 ========================== 00:10:07.317 Submission Queue Entry Size 00:10:07.317 Max: 64 00:10:07.317 Min: 64 00:10:07.317 Completion Queue Entry Size 00:10:07.317 Max: 16 
00:10:07.317 Min: 16 00:10:07.317 Number of Namespaces: 256 00:10:07.317 Compare Command: Supported 00:10:07.317 Write Uncorrectable Command: Not Supported 00:10:07.317 Dataset Management Command: Supported 00:10:07.317 Write Zeroes Command: Supported 00:10:07.317 Set Features Save Field: Supported 00:10:07.317 Reservations: Not Supported 00:10:07.317 Timestamp: Supported 00:10:07.317 Copy: Supported 00:10:07.317 Volatile Write Cache: Present 00:10:07.317 Atomic Write Unit (Normal): 1 00:10:07.317 Atomic Write Unit (PFail): 1 00:10:07.317 Atomic Compare & Write Unit: 1 00:10:07.317 Fused Compare & Write: Not Supported 00:10:07.317 Scatter-Gather List 00:10:07.317 SGL Command Set: Supported 00:10:07.317 SGL Keyed: Not Supported 00:10:07.317 SGL Bit Bucket Descriptor: Not Supported 00:10:07.317 SGL Metadata Pointer: Not Supported 00:10:07.317 Oversized SGL: Not Supported 00:10:07.317 SGL Metadata Address: Not Supported 00:10:07.317 SGL Offset: Not Supported 00:10:07.317 Transport SGL Data Block: Not Supported 00:10:07.317 Replay Protected Memory Block: Not Supported 00:10:07.317 00:10:07.317 Firmware Slot Information 00:10:07.317 ========================= 00:10:07.317 Active slot: 1 00:10:07.317 Slot 1 Firmware Revision: 1.0 00:10:07.317 00:10:07.317 00:10:07.317 Commands Supported and Effects 00:10:07.317 ============================== 00:10:07.317 Admin Commands 00:10:07.317 -------------- 00:10:07.317 Delete I/O Submission Queue (00h): Supported 00:10:07.317 Create I/O Submission Queue (01h): Supported 00:10:07.317 Get Log Page (02h): Supported 00:10:07.317 Delete I/O Completion Queue (04h): Supported 00:10:07.317 Create I/O Completion Queue (05h): Supported 00:10:07.317 Identify (06h): Supported 00:10:07.317 Abort (08h): Supported 00:10:07.317 Set Features (09h): Supported 00:10:07.317 Get Features (0Ah): Supported 00:10:07.317 Asynchronous Event Request (0Ch): Supported 00:10:07.317 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:07.317 Directive Send (19h): Supported 00:10:07.317 Directive Receive (1Ah): Supported 00:10:07.317 Virtualization Management (1Ch): Supported 00:10:07.317 Doorbell Buffer Config (7Ch): Supported 00:10:07.317 Format NVM (80h): Supported LBA-Change 00:10:07.317 I/O Commands 00:10:07.317 ------------ 00:10:07.317 Flush (00h): Supported LBA-Change 00:10:07.317 Write (01h): Supported LBA-Change 00:10:07.317 Read (02h): Supported 00:10:07.317 Compare (05h): Supported 00:10:07.317 Write Zeroes (08h): Supported LBA-Change 00:10:07.317 Dataset Management (09h): Supported LBA-Change 00:10:07.317 Unknown (0Ch): Supported 00:10:07.317 Unknown (12h): Supported 00:10:07.317 Copy (19h): Supported LBA-Change 00:10:07.318 Unknown (1Dh): Supported LBA-Change 00:10:07.318 00:10:07.318 Error Log 00:10:07.318 ========= 00:10:07.318 00:10:07.318 Arbitration 00:10:07.318 =========== 00:10:07.318 Arbitration Burst: no limit 00:10:07.318 00:10:07.318 Power Management 00:10:07.318 ================ 00:10:07.318 Number of Power States: 1 00:10:07.318 Current Power State: Power State #0 00:10:07.318 Power State #0: 00:10:07.318 Max Power: 25.00 W 00:10:07.318 Non-Operational State: Operational 00:10:07.318 Entry Latency: 16 microseconds 00:10:07.318 Exit Latency: 4 microseconds 00:10:07.318 Relative Read Throughput: 0 00:10:07.318 Relative Read Latency: 0 00:10:07.318 Relative Write Throughput: 0 00:10:07.318 Relative Write Latency: 0 00:10:07.318 Idle Power: Not Reported 00:10:07.318 Active Power: Not Reported 00:10:07.318 Non-Operational Permissive Mode: Not Supported 
00:10:07.318 00:10:07.318 Health Information 00:10:07.318 ================== 00:10:07.318 Critical Warnings: 00:10:07.318 Available Spare Space: OK 00:10:07.318 Temperature: OK 00:10:07.318 Device Reliability: OK 00:10:07.318 Read Only: No 00:10:07.318 Volatile Memory Backup: OK 00:10:07.318 Current Temperature: 323 Kelvin (50 Celsius) 00:10:07.318 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:07.318 Available Spare: 0% 00:10:07.318 Available Spare Threshold: 0% 00:10:07.318 Life Percentage Used: 0% 00:10:07.318 Data Units Read: 2452 00:10:07.318 Data Units Written: 2239 00:10:07.318 Host Read Commands: 102376 00:10:07.318 Host Write Commands: 100646 00:10:07.318 Controller Busy Time: 0 minutes 00:10:07.318 Power Cycles: 0 00:10:07.318 Power On Hours: 0 hours 00:10:07.318 Unsafe Shutdowns: 0 00:10:07.318 Unrecoverable Media Errors: 0 00:10:07.318 Lifetime Error Log Entries: 0 00:10:07.318 Warning Temperature Time: 0 minutes 00:10:07.318 Critical Temperature Time: 0 minutes 00:10:07.318 00:10:07.318 Number of Queues 00:10:07.318 ================ 00:10:07.318 Number of I/O Submission Queues: 64 00:10:07.318 Number of I/O Completion Queues: 64 00:10:07.318 00:10:07.318 ZNS Specific Controller Data 00:10:07.318 ============================ 00:10:07.318 Zone Append Size Limit: 0 00:10:07.318 00:10:07.318 00:10:07.318 Active Namespaces 00:10:07.318 ================= 00:10:07.318 Namespace ID:1 00:10:07.318 Error Recovery Timeout: Unlimited 00:10:07.318 Command Set Identifier: NVM (00h) 00:10:07.318 Deallocate: Supported 00:10:07.318 Deallocated/Unwritten Error: Supported 00:10:07.318 Deallocated Read Value: All 0x00 00:10:07.318 Deallocate in Write Zeroes: Not Supported 00:10:07.318 Deallocated Guard Field: 0xFFFF 00:10:07.318 Flush: Supported 00:10:07.318 Reservation: Not Supported 00:10:07.318 Namespace Sharing Capabilities: Private 00:10:07.318 Size (in LBAs): 1048576 (4GiB) 00:10:07.318 Capacity (in LBAs): 1048576 (4GiB) 00:10:07.318 Utilization (in LBAs): 1048576 (4GiB) 00:10:07.318 Thin Provisioning: Not Supported 00:10:07.318 Per-NS Atomic Units: No 00:10:07.318 Maximum Single Source Range Length: 128 00:10:07.318 Maximum Copy Length: 128 00:10:07.318 Maximum Source Range Count: 128 00:10:07.318 NGUID/EUI64 Never Reused: No 00:10:07.318 Namespace Write Protected: No 00:10:07.318 Number of LBA Formats: 8 00:10:07.318 Current LBA Format: LBA Format #04 00:10:07.318 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:07.318 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:07.318 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:07.318 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:07.318 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:07.318 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:07.318 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:07.318 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:07.318 00:10:07.318 NVM Specific Namespace Data 00:10:07.318 =========================== 00:10:07.318 Logical Block Storage Tag Mask: 0 00:10:07.318 Protection Information Capabilities: 00:10:07.318 16b Guard Protection Information Storage Tag Support: No 00:10:07.318 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:07.318 Storage Tag Check Read Support: No 00:10:07.318 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Namespace ID:2 00:10:07.318 Error Recovery Timeout: Unlimited 00:10:07.318 Command Set Identifier: NVM (00h) 00:10:07.318 Deallocate: Supported 00:10:07.318 Deallocated/Unwritten Error: Supported 00:10:07.318 Deallocated Read Value: All 0x00 00:10:07.318 Deallocate in Write Zeroes: Not Supported 00:10:07.318 Deallocated Guard Field: 0xFFFF 00:10:07.318 Flush: Supported 00:10:07.318 Reservation: Not Supported 00:10:07.318 Namespace Sharing Capabilities: Private 00:10:07.318 Size (in LBAs): 1048576 (4GiB) 00:10:07.318 Capacity (in LBAs): 1048576 (4GiB) 00:10:07.318 Utilization (in LBAs): 1048576 (4GiB) 00:10:07.318 Thin Provisioning: Not Supported 00:10:07.318 Per-NS Atomic Units: No 00:10:07.318 Maximum Single Source Range Length: 128 00:10:07.318 Maximum Copy Length: 128 00:10:07.318 Maximum Source Range Count: 128 00:10:07.318 NGUID/EUI64 Never Reused: No 00:10:07.318 Namespace Write Protected: No 00:10:07.318 Number of LBA Formats: 8 00:10:07.318 Current LBA Format: LBA Format #04 00:10:07.318 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:07.318 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:07.318 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:07.318 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:07.318 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:07.318 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:07.318 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:07.318 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:07.318 00:10:07.318 NVM Specific Namespace Data 00:10:07.318 =========================== 00:10:07.318 Logical Block Storage Tag Mask: 0 00:10:07.318 Protection Information Capabilities: 00:10:07.318 16b Guard Protection Information Storage Tag Support: No 00:10:07.318 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:07.318 Storage Tag Check Read Support: No 00:10:07.318 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.318 Namespace ID:3 00:10:07.318 Error Recovery Timeout: Unlimited 00:10:07.318 Command Set Identifier: NVM (00h) 00:10:07.318 Deallocate: Supported 00:10:07.318 Deallocated/Unwritten Error: Supported 00:10:07.318 Deallocated Read 
Value: All 0x00 00:10:07.318 Deallocate in Write Zeroes: Not Supported 00:10:07.318 Deallocated Guard Field: 0xFFFF 00:10:07.318 Flush: Supported 00:10:07.318 Reservation: Not Supported 00:10:07.318 Namespace Sharing Capabilities: Private 00:10:07.318 Size (in LBAs): 1048576 (4GiB) 00:10:07.318 Capacity (in LBAs): 1048576 (4GiB) 00:10:07.318 Utilization (in LBAs): 1048576 (4GiB) 00:10:07.318 Thin Provisioning: Not Supported 00:10:07.318 Per-NS Atomic Units: No 00:10:07.318 Maximum Single Source Range Length: 128 00:10:07.318 Maximum Copy Length: 128 00:10:07.318 Maximum Source Range Count: 128 00:10:07.318 NGUID/EUI64 Never Reused: No 00:10:07.318 Namespace Write Protected: No 00:10:07.318 Number of LBA Formats: 8 00:10:07.318 Current LBA Format: LBA Format #04 00:10:07.318 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:07.318 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:07.318 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:07.318 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:07.318 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:07.318 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:07.318 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:07.318 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:07.318 00:10:07.318 NVM Specific Namespace Data 00:10:07.318 =========================== 00:10:07.318 Logical Block Storage Tag Mask: 0 00:10:07.318 Protection Information Capabilities: 00:10:07.318 16b Guard Protection Information Storage Tag Support: No 00:10:07.319 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:07.319 Storage Tag Check Read Support: No 00:10:07.319 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.319 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.319 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.319 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.319 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.319 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.319 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.319 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.319 08:30:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:07.319 08:30:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:07.579 ===================================================== 00:10:07.579 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:07.579 ===================================================== 00:10:07.579 Controller Capabilities/Features 00:10:07.579 ================================ 00:10:07.579 Vendor ID: 1b36 00:10:07.579 Subsystem Vendor ID: 1af4 00:10:07.579 Serial Number: 12343 00:10:07.579 Model Number: QEMU NVMe Ctrl 00:10:07.579 Firmware Version: 8.0.0 00:10:07.579 Recommended Arb Burst: 6 00:10:07.579 IEEE OUI Identifier: 00 54 52 00:10:07.579 Multi-path I/O 00:10:07.579 May have multiple subsystem ports: No 00:10:07.579 May have multiple controllers: Yes 00:10:07.579 Associated with SR-IOV VF: No 00:10:07.579 Max Data Transfer Size: 524288 00:10:07.579 Max Number of Namespaces: 
256 00:10:07.579 Max Number of I/O Queues: 64 00:10:07.579 NVMe Specification Version (VS): 1.4 00:10:07.579 NVMe Specification Version (Identify): 1.4 00:10:07.579 Maximum Queue Entries: 2048 00:10:07.579 Contiguous Queues Required: Yes 00:10:07.579 Arbitration Mechanisms Supported 00:10:07.579 Weighted Round Robin: Not Supported 00:10:07.579 Vendor Specific: Not Supported 00:10:07.579 Reset Timeout: 7500 ms 00:10:07.579 Doorbell Stride: 4 bytes 00:10:07.579 NVM Subsystem Reset: Not Supported 00:10:07.579 Command Sets Supported 00:10:07.579 NVM Command Set: Supported 00:10:07.579 Boot Partition: Not Supported 00:10:07.579 Memory Page Size Minimum: 4096 bytes 00:10:07.579 Memory Page Size Maximum: 65536 bytes 00:10:07.579 Persistent Memory Region: Not Supported 00:10:07.579 Optional Asynchronous Events Supported 00:10:07.579 Namespace Attribute Notices: Supported 00:10:07.579 Firmware Activation Notices: Not Supported 00:10:07.579 ANA Change Notices: Not Supported 00:10:07.579 PLE Aggregate Log Change Notices: Not Supported 00:10:07.579 LBA Status Info Alert Notices: Not Supported 00:10:07.579 EGE Aggregate Log Change Notices: Not Supported 00:10:07.579 Normal NVM Subsystem Shutdown event: Not Supported 00:10:07.579 Zone Descriptor Change Notices: Not Supported 00:10:07.579 Discovery Log Change Notices: Not Supported 00:10:07.579 Controller Attributes 00:10:07.579 128-bit Host Identifier: Not Supported 00:10:07.579 Non-Operational Permissive Mode: Not Supported 00:10:07.579 NVM Sets: Not Supported 00:10:07.579 Read Recovery Levels: Not Supported 00:10:07.579 Endurance Groups: Supported 00:10:07.579 Predictable Latency Mode: Not Supported 00:10:07.579 Traffic Based Keep Alive: Not Supported 00:10:07.579 Namespace Granularity: Not Supported 00:10:07.579 SQ Associations: Not Supported 00:10:07.579 UUID List: Not Supported 00:10:07.579 Multi-Domain Subsystem: Not Supported 00:10:07.579 Fixed Capacity Management: Not Supported 00:10:07.579 Variable Capacity Management: Not Supported 00:10:07.579 Delete Endurance Group: Not Supported 00:10:07.579 Delete NVM Set: Not Supported 00:10:07.579 Extended LBA Formats Supported: Supported 00:10:07.579 Flexible Data Placement Supported: Supported 00:10:07.579 00:10:07.579 Controller Memory Buffer Support 00:10:07.579 ================================ 00:10:07.579 Supported: No 00:10:07.579 00:10:07.579 Persistent Memory Region Support 00:10:07.579 ================================ 00:10:07.579 Supported: No 00:10:07.579 00:10:07.579 Admin Command Set Attributes 00:10:07.579 ============================ 00:10:07.579 Security Send/Receive: Not Supported 00:10:07.579 Format NVM: Supported 00:10:07.579 Firmware Activate/Download: Not Supported 00:10:07.579 Namespace Management: Supported 00:10:07.579 Device Self-Test: Not Supported 00:10:07.579 Directives: Supported 00:10:07.579 NVMe-MI: Not Supported 00:10:07.579 Virtualization Management: Not Supported 00:10:07.579 Doorbell Buffer Config: Supported 00:10:07.579 Get LBA Status Capability: Not Supported 00:10:07.579 Command & Feature Lockdown Capability: Not Supported 00:10:07.579 Abort Command Limit: 4 00:10:07.579 Async Event Request Limit: 4 00:10:07.579 Number of Firmware Slots: N/A 00:10:07.579 Firmware Slot 1 Read-Only: N/A 00:10:07.579 Firmware Activation Without Reset: N/A 00:10:07.579 Multiple Update Detection Support: N/A 00:10:07.579 Firmware Update Granularity: No Information Provided 00:10:07.579 Per-Namespace SMART Log: Yes 00:10:07.579 Asymmetric Namespace Access Log Page: Not Supported
00:10:07.579 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:07.579 Command Effects Log Page: Supported 00:10:07.579 Get Log Page Extended Data: Supported 00:10:07.579 Telemetry Log Pages: Not Supported 00:10:07.579 Persistent Event Log Pages: Not Supported 00:10:07.579 Supported Log Pages Log Page: May Support 00:10:07.579 Commands Supported & Effects Log Page: Not Supported 00:10:07.579 Feature Identifiers & Effects Log Page: May Support 00:10:07.579 NVMe-MI Commands & Effects Log Page: May Support 00:10:07.579 Data Area 4 for Telemetry Log: Not Supported 00:10:07.579 Error Log Page Entries Supported: 1 00:10:07.579 Keep Alive: Not Supported 00:10:07.579 00:10:07.579 NVM Command Set Attributes 00:10:07.579 ========================== 00:10:07.579 Submission Queue Entry Size 00:10:07.579 Max: 64 00:10:07.579 Min: 64 00:10:07.579 Completion Queue Entry Size 00:10:07.579 Max: 16 00:10:07.579 Min: 16 00:10:07.579 Number of Namespaces: 256 00:10:07.580 Compare Command: Supported 00:10:07.580 Write Uncorrectable Command: Not Supported 00:10:07.580 Dataset Management Command: Supported 00:10:07.580 Write Zeroes Command: Supported 00:10:07.580 Set Features Save Field: Supported 00:10:07.580 Reservations: Not Supported 00:10:07.580 Timestamp: Supported 00:10:07.580 Copy: Supported 00:10:07.580 Volatile Write Cache: Present 00:10:07.580 Atomic Write Unit (Normal): 1 00:10:07.580 Atomic Write Unit (PFail): 1 00:10:07.580 Atomic Compare & Write Unit: 1 00:10:07.580 Fused Compare & Write: Not Supported 00:10:07.580 Scatter-Gather List 00:10:07.580 SGL Command Set: Supported 00:10:07.580 SGL Keyed: Not Supported 00:10:07.580 SGL Bit Bucket Descriptor: Not Supported 00:10:07.580 SGL Metadata Pointer: Not Supported 00:10:07.580 Oversized SGL: Not Supported 00:10:07.580 SGL Metadata Address: Not Supported 00:10:07.580 SGL Offset: Not Supported 00:10:07.580 Transport SGL Data Block: Not Supported 00:10:07.580 Replay Protected Memory Block: Not Supported 00:10:07.580 00:10:07.580 Firmware Slot Information 00:10:07.580 ========================= 00:10:07.580 Active slot: 1 00:10:07.580 Slot 1 Firmware Revision: 1.0 00:10:07.580 00:10:07.580 00:10:07.580 Commands Supported and Effects 00:10:07.580 ============================== 00:10:07.580 Admin Commands 00:10:07.580 -------------- 00:10:07.580 Delete I/O Submission Queue (00h): Supported 00:10:07.580 Create I/O Submission Queue (01h): Supported 00:10:07.580 Get Log Page (02h): Supported 00:10:07.580 Delete I/O Completion Queue (04h): Supported 00:10:07.580 Create I/O Completion Queue (05h): Supported 00:10:07.580 Identify (06h): Supported 00:10:07.580 Abort (08h): Supported 00:10:07.580 Set Features (09h): Supported 00:10:07.580 Get Features (0Ah): Supported 00:10:07.580 Asynchronous Event Request (0Ch): Supported 00:10:07.580 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:07.580 Directive Send (19h): Supported 00:10:07.580 Directive Receive (1Ah): Supported 00:10:07.580 Virtualization Management (1Ch): Supported 00:10:07.580 Doorbell Buffer Config (7Ch): Supported 00:10:07.580 Format NVM (80h): Supported LBA-Change 00:10:07.580 I/O Commands 00:10:07.580 ------------ 00:10:07.580 Flush (00h): Supported LBA-Change 00:10:07.580 Write (01h): Supported LBA-Change 00:10:07.580 Read (02h): Supported 00:10:07.580 Compare (05h): Supported 00:10:07.580 Write Zeroes (08h): Supported LBA-Change 00:10:07.580 Dataset Management (09h): Supported LBA-Change 00:10:07.580 Unknown (0Ch): Supported 00:10:07.580 Unknown (12h): Supported 00:10:07.580 Copy
(19h): Supported LBA-Change 00:10:07.580 Unknown (1Dh): Supported LBA-Change 00:10:07.580 00:10:07.580 Error Log 00:10:07.580 ========= 00:10:07.580 00:10:07.580 Arbitration 00:10:07.580 =========== 00:10:07.580 Arbitration Burst: no limit 00:10:07.580 00:10:07.580 Power Management 00:10:07.580 ================ 00:10:07.580 Number of Power States: 1 00:10:07.580 Current Power State: Power State #0 00:10:07.580 Power State #0: 00:10:07.580 Max Power: 25.00 W 00:10:07.580 Non-Operational State: Operational 00:10:07.580 Entry Latency: 16 microseconds 00:10:07.580 Exit Latency: 4 microseconds 00:10:07.580 Relative Read Throughput: 0 00:10:07.580 Relative Read Latency: 0 00:10:07.580 Relative Write Throughput: 0 00:10:07.580 Relative Write Latency: 0 00:10:07.580 Idle Power: Not Reported 00:10:07.580 Active Power: Not Reported 00:10:07.580 Non-Operational Permissive Mode: Not Supported 00:10:07.580 00:10:07.580 Health Information 00:10:07.580 ================== 00:10:07.580 Critical Warnings: 00:10:07.580 Available Spare Space: OK 00:10:07.580 Temperature: OK 00:10:07.580 Device Reliability: OK 00:10:07.580 Read Only: No 00:10:07.580 Volatile Memory Backup: OK 00:10:07.580 Current Temperature: 323 Kelvin (50 Celsius) 00:10:07.580 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:07.580 Available Spare: 0% 00:10:07.580 Available Spare Threshold: 0% 00:10:07.580 Life Percentage Used: 0% 00:10:07.580 Data Units Read: 968 00:10:07.580 Data Units Written: 897 00:10:07.580 Host Read Commands: 35301 00:10:07.580 Host Write Commands: 34724 00:10:07.580 Controller Busy Time: 0 minutes 00:10:07.580 Power Cycles: 0 00:10:07.580 Power On Hours: 0 hours 00:10:07.580 Unsafe Shutdowns: 0 00:10:07.580 Unrecoverable Media Errors: 0 00:10:07.580 Lifetime Error Log Entries: 0 00:10:07.580 Warning Temperature Time: 0 minutes 00:10:07.580 Critical Temperature Time: 0 minutes 00:10:07.580 00:10:07.580 Number of Queues 00:10:07.580 ================ 00:10:07.580 Number of I/O Submission Queues: 64 00:10:07.580 Number of I/O Completion Queues: 64 00:10:07.580 00:10:07.580 ZNS Specific Controller Data 00:10:07.580 ============================ 00:10:07.580 Zone Append Size Limit: 0 00:10:07.580 00:10:07.580 00:10:07.580 Active Namespaces 00:10:07.580 ================= 00:10:07.580 Namespace ID:1 00:10:07.580 Error Recovery Timeout: Unlimited 00:10:07.580 Command Set Identifier: NVM (00h) 00:10:07.580 Deallocate: Supported 00:10:07.580 Deallocated/Unwritten Error: Supported 00:10:07.580 Deallocated Read Value: All 0x00 00:10:07.580 Deallocate in Write Zeroes: Not Supported 00:10:07.580 Deallocated Guard Field: 0xFFFF 00:10:07.580 Flush: Supported 00:10:07.580 Reservation: Not Supported 00:10:07.580 Namespace Sharing Capabilities: Multiple Controllers 00:10:07.580 Size (in LBAs): 262144 (1GiB) 00:10:07.580 Capacity (in LBAs): 262144 (1GiB) 00:10:07.580 Utilization (in LBAs): 262144 (1GiB) 00:10:07.580 Thin Provisioning: Not Supported 00:10:07.580 Per-NS Atomic Units: No 00:10:07.580 Maximum Single Source Range Length: 128 00:10:07.580 Maximum Copy Length: 128 00:10:07.580 Maximum Source Range Count: 128 00:10:07.580 NGUID/EUI64 Never Reused: No 00:10:07.580 Namespace Write Protected: No 00:10:07.580 Endurance group ID: 1 00:10:07.580 Number of LBA Formats: 8 00:10:07.580 Current LBA Format: LBA Format #04 00:10:07.580 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:07.580 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:07.580 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:07.580 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:10:07.580 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:07.580 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:07.580 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:07.580 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:07.580 00:10:07.580 Get Feature FDP: 00:10:07.580 ================ 00:10:07.580 Enabled: Yes 00:10:07.580 FDP configuration index: 0 00:10:07.580 00:10:07.580 FDP configurations log page 00:10:07.580 =========================== 00:10:07.580 Number of FDP configurations: 1 00:10:07.580 Version: 0 00:10:07.580 Size: 112 00:10:07.580 FDP Configuration Descriptor: 0 00:10:07.580 Descriptor Size: 96 00:10:07.580 Reclaim Group Identifier format: 2 00:10:07.580 FDP Volatile Write Cache: Not Present 00:10:07.580 FDP Configuration: Valid 00:10:07.580 Vendor Specific Size: 0 00:10:07.580 Number of Reclaim Groups: 2 00:10:07.580 Number of Reclaim Unit Handles: 8 00:10:07.580 Max Placement Identifiers: 128 00:10:07.580 Number of Namespaces Supported: 256 00:10:07.580 Reclaim Unit Nominal Size: 6000000 bytes 00:10:07.580 Estimated Reclaim Unit Time Limit: Not Reported 00:10:07.580 RUH Desc #000: RUH Type: Initially Isolated 00:10:07.580 RUH Desc #001: RUH Type: Initially Isolated 00:10:07.580 RUH Desc #002: RUH Type: Initially Isolated 00:10:07.580 RUH Desc #003: RUH Type: Initially Isolated 00:10:07.580 RUH Desc #004: RUH Type: Initially Isolated 00:10:07.580 RUH Desc #005: RUH Type: Initially Isolated 00:10:07.580 RUH Desc #006: RUH Type: Initially Isolated 00:10:07.580 RUH Desc #007: RUH Type: Initially Isolated 00:10:07.580 00:10:07.580 FDP reclaim unit handle usage log page 00:10:07.580 ====================================== 00:10:07.580 Number of Reclaim Unit Handles: 8 00:10:07.580 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:07.580 RUH Usage Desc #001: RUH Attributes: Unused 00:10:07.580 RUH Usage Desc #002: RUH Attributes: Unused 00:10:07.580 RUH Usage Desc #003: RUH Attributes: Unused 00:10:07.580 RUH Usage Desc #004: RUH Attributes: Unused 00:10:07.580 RUH Usage Desc #005: RUH Attributes: Unused 00:10:07.580 RUH Usage Desc #006: RUH Attributes: Unused 00:10:07.580 RUH Usage Desc #007: RUH Attributes: Unused 00:10:07.580 00:10:07.580 FDP statistics log page 00:10:07.580 ======================= 00:10:07.580 Host bytes with metadata written: 578396160 00:10:07.580 Media bytes with metadata written: 578473984 00:10:07.580 Media bytes erased: 0 00:10:07.580 00:10:07.580 FDP events log page 00:10:07.580 =================== 00:10:07.580 Number of FDP events: 0 00:10:07.580 00:10:07.580 NVM Specific Namespace Data 00:10:07.580 =========================== 00:10:07.580 Logical Block Storage Tag Mask: 0 00:10:07.580 Protection Information Capabilities: 00:10:07.581 16b Guard Protection Information Storage Tag Support: No 00:10:07.581 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:07.581 Storage Tag Check Read Support: No 00:10:07.581 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.581 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.581 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.581 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.581 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.581 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.581 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.581 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:07.581 00:10:07.581 real 0m1.749s 00:10:07.581 user 0m0.656s 00:10:07.581 sys 0m0.883s 00:10:07.581 08:30:42 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.581 08:30:42 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:07.581 ************************************ 00:10:07.581 END TEST nvme_identify 00:10:07.581 ************************************ 00:10:07.840 08:30:42 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:07.840 08:30:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.840 08:30:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.840 08:30:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:07.840 ************************************ 00:10:07.840 START TEST nvme_perf 00:10:07.840 ************************************ 00:10:07.840 08:30:42 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:10:07.840 08:30:42 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:09.222 Initializing NVMe Controllers 00:10:09.222 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:09.222 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:09.222 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:09.222 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:09.222 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:09.222 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:09.222 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:09.222 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:09.222 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:09.222 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:09.222 Initialization complete. Launching workers. 
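The xtrace lines above show exactly how the suite drives the two SPDK example binaries, so the identify dumps and the perf numbers that follow can be reproduced outside Jenkins. A minimal bash sketch, assuming a built SPDK tree at $SPDK_DIR (a placeholder; this job uses /home/vagrant/spdk_repo/spdk) and the same QEMU-emulated controller at PCI address 0000:00:13.0:

  #!/usr/bin/env bash
  # Sketch of the invocations traced above; SPDK_DIR is a placeholder default.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

  # Per-controller identify dump, as nvme/nvme.sh@15-16 does for each BDF:
  "$SPDK_DIR/build/bin/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # 1-second run at queue depth 128 with 12288-byte (12 KiB) sequential reads;
  # the doubled -L (-LL) adds the per-device latency histograms printed below.
  # The remaining flags (-i 0 -N) are carried over verbatim from the trace.
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Against real hardware the same flags apply, but the NVMe devices must first be unbound from the kernel driver and handed to a userspace driver, typically via SPDK's scripts/setup.sh.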
00:10:09.222 ======================================================== 00:10:09.222 Latency(us) 00:10:09.222 Device Information : IOPS MiB/s Average min max 00:10:09.222 PCIE (0000:00:10.0) NSID 1 from core 0: 13991.75 163.97 9167.47 6625.87 51221.19 00:10:09.222 PCIE (0000:00:11.0) NSID 1 from core 0: 13991.75 163.97 9151.19 6688.36 48907.46 00:10:09.222 PCIE (0000:00:13.0) NSID 1 from core 0: 13991.75 163.97 9133.01 6905.63 47300.44 00:10:09.222 PCIE (0000:00:12.0) NSID 1 from core 0: 13991.75 163.97 9115.02 6831.58 45055.89 00:10:09.222 PCIE (0000:00:12.0) NSID 2 from core 0: 13991.75 163.97 9096.70 6778.92 42821.94 00:10:09.222 PCIE (0000:00:12.0) NSID 3 from core 0: 14055.64 164.71 9037.38 6648.78 35636.70 00:10:09.222 ======================================================== 00:10:09.222 Total : 84014.40 984.54 9116.73 6625.87 51221.19 00:10:09.222 00:10:09.222 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:09.222 ================================================================================= 00:10:09.222 1.00000% : 7106.313us 00:10:09.222 10.00000% : 7527.428us 00:10:09.222 25.00000% : 8211.740us 00:10:09.222 50.00000% : 8685.494us 00:10:09.222 75.00000% : 9106.609us 00:10:09.222 90.00000% : 9738.281us 00:10:09.222 95.00000% : 11738.577us 00:10:09.222 98.00000% : 15160.135us 00:10:09.222 99.00000% : 19266.005us 00:10:09.222 99.50000% : 44848.733us 00:10:09.222 99.90000% : 50954.898us 00:10:09.222 99.99000% : 51165.455us 00:10:09.222 99.99900% : 51376.013us 00:10:09.222 99.99990% : 51376.013us 00:10:09.222 99.99999% : 51376.013us 00:10:09.222 00:10:09.222 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:09.222 ================================================================================= 00:10:09.222 1.00000% : 7106.313us 00:10:09.222 10.00000% : 7580.067us 00:10:09.222 25.00000% : 8211.740us 00:10:09.222 50.00000% : 8685.494us 00:10:09.222 75.00000% : 9053.969us 00:10:09.222 90.00000% : 9790.920us 00:10:09.222 95.00000% : 11896.495us 00:10:09.222 98.00000% : 15370.692us 00:10:09.222 99.00000% : 18107.939us 00:10:09.222 99.50000% : 42743.158us 00:10:09.222 99.90000% : 48638.766us 00:10:09.222 99.99000% : 49059.881us 00:10:09.222 99.99900% : 49059.881us 00:10:09.222 99.99990% : 49059.881us 00:10:09.222 99.99999% : 49059.881us 00:10:09.222 00:10:09.222 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:09.222 ================================================================================= 00:10:09.222 1.00000% : 7211.592us 00:10:09.223 10.00000% : 7527.428us 00:10:09.223 25.00000% : 8211.740us 00:10:09.223 50.00000% : 8685.494us 00:10:09.223 75.00000% : 9053.969us 00:10:09.223 90.00000% : 9685.642us 00:10:09.223 95.00000% : 12001.773us 00:10:09.223 98.00000% : 16107.643us 00:10:09.223 99.00000% : 17160.431us 00:10:09.223 99.50000% : 40848.141us 00:10:09.223 99.90000% : 46954.307us 00:10:09.223 99.99000% : 47375.422us 00:10:09.223 99.99900% : 47375.422us 00:10:09.223 99.99990% : 47375.422us 00:10:09.223 99.99999% : 47375.422us 00:10:09.223 00:10:09.223 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:09.223 ================================================================================= 00:10:09.223 1.00000% : 7158.953us 00:10:09.223 10.00000% : 7580.067us 00:10:09.223 25.00000% : 8211.740us 00:10:09.223 50.00000% : 8685.494us 00:10:09.223 75.00000% : 9053.969us 00:10:09.223 90.00000% : 9685.642us 00:10:09.223 95.00000% : 12422.888us 00:10:09.223 98.00000% : 16002.365us 00:10:09.223 99.00000% : 
17476.267us 00:10:09.223 99.50000% : 38532.010us 00:10:09.223 99.90000% : 44848.733us 00:10:09.223 99.99000% : 45059.290us 00:10:09.223 99.99900% : 45059.290us 00:10:09.223 99.99990% : 45059.290us 00:10:09.223 99.99999% : 45059.290us 00:10:09.223 00:10:09.223 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:09.223 ================================================================================= 00:10:09.223 1.00000% : 7158.953us 00:10:09.223 10.00000% : 7580.067us 00:10:09.223 25.00000% : 8211.740us 00:10:09.223 50.00000% : 8685.494us 00:10:09.223 75.00000% : 9053.969us 00:10:09.223 90.00000% : 9685.642us 00:10:09.223 95.00000% : 12528.167us 00:10:09.223 98.00000% : 15581.250us 00:10:09.223 99.00000% : 18107.939us 00:10:09.223 99.50000% : 36215.878us 00:10:09.223 99.90000% : 42532.601us 00:10:09.223 99.99000% : 42953.716us 00:10:09.223 99.99900% : 42953.716us 00:10:09.223 99.99990% : 42953.716us 00:10:09.223 99.99999% : 42953.716us 00:10:09.223 00:10:09.223 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:09.223 ================================================================================= 00:10:09.223 1.00000% : 7158.953us 00:10:09.223 10.00000% : 7580.067us 00:10:09.223 25.00000% : 8211.740us 00:10:09.223 50.00000% : 8685.494us 00:10:09.223 75.00000% : 9053.969us 00:10:09.223 90.00000% : 9790.920us 00:10:09.223 95.00000% : 12212.331us 00:10:09.223 98.00000% : 15054.856us 00:10:09.223 99.00000% : 18844.890us 00:10:09.223 99.50000% : 29478.040us 00:10:09.223 99.90000% : 35373.648us 00:10:09.223 99.99000% : 35794.763us 00:10:09.223 99.99900% : 35794.763us 00:10:09.223 99.99990% : 35794.763us 00:10:09.223 99.99999% : 35794.763us 00:10:09.223 00:10:09.223 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:09.223 ============================================================================== 00:10:09.223 Range in us Cumulative IO count 00:10:09.223 6606.239 - 6632.559: 0.0071% ( 1) 00:10:09.223 6632.559 - 6658.879: 0.0214% ( 2) 00:10:09.223 6658.879 - 6685.198: 0.0357% ( 2) 00:10:09.223 6685.198 - 6711.518: 0.0428% ( 1) 00:10:09.223 6711.518 - 6737.838: 0.0713% ( 4) 00:10:09.223 6737.838 - 6790.477: 0.1213% ( 7) 00:10:09.223 6790.477 - 6843.116: 0.1998% ( 11) 00:10:09.223 6843.116 - 6895.756: 0.3353% ( 19) 00:10:09.223 6895.756 - 6948.395: 0.5066% ( 24) 00:10:09.223 6948.395 - 7001.035: 0.6421% ( 19) 00:10:09.223 7001.035 - 7053.674: 0.9632% ( 45) 00:10:09.223 7053.674 - 7106.313: 1.4626% ( 70) 00:10:09.223 7106.313 - 7158.953: 2.1547% ( 97) 00:10:09.223 7158.953 - 7211.592: 3.0751% ( 129) 00:10:09.223 7211.592 - 7264.231: 4.1310% ( 148) 00:10:09.223 7264.231 - 7316.871: 5.3867% ( 176) 00:10:09.223 7316.871 - 7369.510: 6.5853% ( 168) 00:10:09.223 7369.510 - 7422.149: 8.0051% ( 199) 00:10:09.223 7422.149 - 7474.789: 9.1324% ( 158) 00:10:09.223 7474.789 - 7527.428: 10.4238% ( 181) 00:10:09.223 7527.428 - 7580.067: 11.4869% ( 149) 00:10:09.223 7580.067 - 7632.707: 12.4358% ( 133) 00:10:09.223 7632.707 - 7685.346: 13.3776% ( 132) 00:10:09.223 7685.346 - 7737.986: 14.2266% ( 119) 00:10:09.223 7737.986 - 7790.625: 15.3039% ( 151) 00:10:09.223 7790.625 - 7843.264: 16.4598% ( 162) 00:10:09.223 7843.264 - 7895.904: 17.7654% ( 183) 00:10:09.223 7895.904 - 7948.543: 19.1281% ( 191) 00:10:09.223 7948.543 - 8001.182: 20.4267% ( 182) 00:10:09.223 8001.182 - 8053.822: 21.8750% ( 203) 00:10:09.223 8053.822 - 8106.461: 23.1735% ( 182) 00:10:09.223 8106.461 - 8159.100: 24.7646% ( 223) 00:10:09.223 8159.100 - 8211.740: 26.4555% ( 237) 00:10:09.223 
8211.740 - 8264.379: 28.6672% ( 310) 00:10:09.223 8264.379 - 8317.018: 31.0074% ( 328) 00:10:09.223 8317.018 - 8369.658: 33.5759% ( 360) 00:10:09.223 8369.658 - 8422.297: 36.2300% ( 372) 00:10:09.223 8422.297 - 8474.937: 39.2195% ( 419) 00:10:09.223 8474.937 - 8527.576: 42.5228% ( 463) 00:10:09.223 8527.576 - 8580.215: 45.9332% ( 478) 00:10:09.223 8580.215 - 8632.855: 49.4078% ( 487) 00:10:09.223 8632.855 - 8685.494: 52.9110% ( 491) 00:10:09.223 8685.494 - 8738.133: 56.2643% ( 470) 00:10:09.223 8738.133 - 8790.773: 59.7317% ( 486) 00:10:09.223 8790.773 - 8843.412: 62.9495% ( 451) 00:10:09.223 8843.412 - 8896.051: 65.9104% ( 415) 00:10:09.223 8896.051 - 8948.691: 68.7643% ( 400) 00:10:09.223 8948.691 - 9001.330: 71.4612% ( 378) 00:10:09.223 9001.330 - 9053.969: 74.1510% ( 377) 00:10:09.223 9053.969 - 9106.609: 76.7266% ( 361) 00:10:09.223 9106.609 - 9159.248: 79.2166% ( 349) 00:10:09.223 9159.248 - 9211.888: 81.4355% ( 311) 00:10:09.223 9211.888 - 9264.527: 83.2549% ( 255) 00:10:09.223 9264.527 - 9317.166: 84.7603% ( 211) 00:10:09.223 9317.166 - 9369.806: 86.0945% ( 187) 00:10:09.223 9369.806 - 9422.445: 87.2432% ( 161) 00:10:09.223 9422.445 - 9475.084: 88.1279% ( 124) 00:10:09.223 9475.084 - 9527.724: 88.7842% ( 92) 00:10:09.223 9527.724 - 9580.363: 89.2123% ( 60) 00:10:09.223 9580.363 - 9633.002: 89.5548% ( 48) 00:10:09.223 9633.002 - 9685.642: 89.8259% ( 38) 00:10:09.223 9685.642 - 9738.281: 90.1042% ( 39) 00:10:09.223 9738.281 - 9790.920: 90.3753% ( 38) 00:10:09.223 9790.920 - 9843.560: 90.6321% ( 36) 00:10:09.223 9843.560 - 9896.199: 90.8676% ( 33) 00:10:09.223 9896.199 - 9948.839: 91.0674% ( 28) 00:10:09.223 9948.839 - 10001.478: 91.2743% ( 29) 00:10:09.223 10001.478 - 10054.117: 91.5026% ( 32) 00:10:09.223 10054.117 - 10106.757: 91.6809% ( 25) 00:10:09.223 10106.757 - 10159.396: 91.8522% ( 24) 00:10:09.223 10159.396 - 10212.035: 91.9806% ( 18) 00:10:09.223 10212.035 - 10264.675: 92.1590% ( 25) 00:10:09.223 10264.675 - 10317.314: 92.3302% ( 24) 00:10:09.223 10317.314 - 10369.953: 92.5300% ( 28) 00:10:09.223 10369.953 - 10422.593: 92.6869% ( 22) 00:10:09.223 10422.593 - 10475.232: 92.8724% ( 26) 00:10:09.223 10475.232 - 10527.871: 93.0151% ( 20) 00:10:09.224 10527.871 - 10580.511: 93.1364% ( 17) 00:10:09.224 10580.511 - 10633.150: 93.2648% ( 18) 00:10:09.224 10633.150 - 10685.790: 93.3362% ( 10) 00:10:09.224 10685.790 - 10738.429: 93.4147% ( 11) 00:10:09.224 10738.429 - 10791.068: 93.4860% ( 10) 00:10:09.224 10791.068 - 10843.708: 93.5716% ( 12) 00:10:09.224 10843.708 - 10896.347: 93.6501% ( 11) 00:10:09.224 10896.347 - 10948.986: 93.7571% ( 15) 00:10:09.224 10948.986 - 11001.626: 93.8213% ( 9) 00:10:09.224 11001.626 - 11054.265: 93.9070% ( 12) 00:10:09.224 11054.265 - 11106.904: 94.0140% ( 15) 00:10:09.224 11106.904 - 11159.544: 94.1067% ( 13) 00:10:09.224 11159.544 - 11212.183: 94.2138% ( 15) 00:10:09.224 11212.183 - 11264.822: 94.2994% ( 12) 00:10:09.224 11264.822 - 11317.462: 94.3993% ( 14) 00:10:09.224 11317.462 - 11370.101: 94.4920% ( 13) 00:10:09.224 11370.101 - 11422.741: 94.5491% ( 8) 00:10:09.224 11422.741 - 11475.380: 94.6347% ( 12) 00:10:09.224 11475.380 - 11528.019: 94.7203% ( 12) 00:10:09.224 11528.019 - 11580.659: 94.7774% ( 8) 00:10:09.224 11580.659 - 11633.298: 94.8701% ( 13) 00:10:09.224 11633.298 - 11685.937: 94.9629% ( 13) 00:10:09.224 11685.937 - 11738.577: 95.0485% ( 12) 00:10:09.224 11738.577 - 11791.216: 95.0985% ( 7) 00:10:09.224 11791.216 - 11843.855: 95.1841% ( 12) 00:10:09.224 11843.855 - 11896.495: 95.2269% ( 6) 00:10:09.224 11896.495 - 11949.134: 95.3196% 
( 13) 00:10:09.224 11949.134 - 12001.773: 95.3624% ( 6) 00:10:09.224 12001.773 - 12054.413: 95.4338% ( 10) 00:10:09.224 12054.413 - 12107.052: 95.4695% ( 5) 00:10:09.224 12107.052 - 12159.692: 95.5123% ( 6) 00:10:09.224 12159.692 - 12212.331: 95.5479% ( 5) 00:10:09.224 12212.331 - 12264.970: 95.5693% ( 3) 00:10:09.224 12264.970 - 12317.610: 95.6050% ( 5) 00:10:09.224 12317.610 - 12370.249: 95.6264% ( 3) 00:10:09.224 12370.249 - 12422.888: 95.6550% ( 4) 00:10:09.224 12422.888 - 12475.528: 95.6835% ( 4) 00:10:09.224 12475.528 - 12528.167: 95.7192% ( 5) 00:10:09.224 12528.167 - 12580.806: 95.7334% ( 2) 00:10:09.224 12580.806 - 12633.446: 95.7620% ( 4) 00:10:09.224 12633.446 - 12686.085: 95.7905% ( 4) 00:10:09.224 12686.085 - 12738.724: 95.8119% ( 3) 00:10:09.224 12738.724 - 12791.364: 95.8333% ( 3) 00:10:09.224 12791.364 - 12844.003: 95.8547% ( 3) 00:10:09.224 12844.003 - 12896.643: 95.8690% ( 2) 00:10:09.224 12896.643 - 12949.282: 95.8975% ( 4) 00:10:09.224 12949.282 - 13001.921: 95.9047% ( 1) 00:10:09.224 13001.921 - 13054.561: 95.9261% ( 3) 00:10:09.224 13054.561 - 13107.200: 95.9475% ( 3) 00:10:09.224 13107.200 - 13159.839: 95.9689% ( 3) 00:10:09.224 13159.839 - 13212.479: 95.9903% ( 3) 00:10:09.224 13212.479 - 13265.118: 96.0117% ( 3) 00:10:09.224 13265.118 - 13317.757: 96.0188% ( 1) 00:10:09.224 13317.757 - 13370.397: 96.0331% ( 2) 00:10:09.224 13370.397 - 13423.036: 96.0402% ( 1) 00:10:09.224 13423.036 - 13475.676: 96.0474% ( 1) 00:10:09.224 13475.676 - 13580.954: 96.0759% ( 4) 00:10:09.224 13580.954 - 13686.233: 96.1045% ( 4) 00:10:09.224 13686.233 - 13791.512: 96.1401% ( 5) 00:10:09.224 13791.512 - 13896.790: 96.2186% ( 11) 00:10:09.224 13896.790 - 14002.069: 96.3042% ( 12) 00:10:09.224 14002.069 - 14107.348: 96.4469% ( 20) 00:10:09.224 14107.348 - 14212.627: 96.5611% ( 16) 00:10:09.224 14212.627 - 14317.905: 96.7252% ( 23) 00:10:09.224 14317.905 - 14423.184: 96.8893% ( 23) 00:10:09.224 14423.184 - 14528.463: 97.0462% ( 22) 00:10:09.224 14528.463 - 14633.741: 97.2032% ( 22) 00:10:09.224 14633.741 - 14739.020: 97.3887% ( 26) 00:10:09.224 14739.020 - 14844.299: 97.5528% ( 23) 00:10:09.224 14844.299 - 14949.578: 97.7526% ( 28) 00:10:09.224 14949.578 - 15054.856: 97.9238% ( 24) 00:10:09.224 15054.856 - 15160.135: 98.0808% ( 22) 00:10:09.224 15160.135 - 15265.414: 98.2520% ( 24) 00:10:09.224 15265.414 - 15370.692: 98.3804% ( 18) 00:10:09.224 15370.692 - 15475.971: 98.4660% ( 12) 00:10:09.224 15475.971 - 15581.250: 98.5160% ( 7) 00:10:09.224 15581.250 - 15686.529: 98.5588% ( 6) 00:10:09.224 15686.529 - 15791.807: 98.6016% ( 6) 00:10:09.224 15791.807 - 15897.086: 98.6087% ( 1) 00:10:09.224 15897.086 - 16002.365: 98.6301% ( 3) 00:10:09.224 18107.939 - 18213.218: 98.6515% ( 3) 00:10:09.224 18213.218 - 18318.496: 98.6872% ( 5) 00:10:09.224 18318.496 - 18423.775: 98.7229% ( 5) 00:10:09.224 18423.775 - 18529.054: 98.7586% ( 5) 00:10:09.224 18529.054 - 18634.333: 98.7942% ( 5) 00:10:09.224 18634.333 - 18739.611: 98.8442% ( 7) 00:10:09.224 18739.611 - 18844.890: 98.8656% ( 3) 00:10:09.224 18844.890 - 18950.169: 98.9013% ( 5) 00:10:09.224 18950.169 - 19055.447: 98.9369% ( 5) 00:10:09.224 19055.447 - 19160.726: 98.9869% ( 7) 00:10:09.224 19160.726 - 19266.005: 99.0083% ( 3) 00:10:09.224 19266.005 - 19371.284: 99.0511% ( 6) 00:10:09.224 19371.284 - 19476.562: 99.0868% ( 5) 00:10:09.224 42953.716 - 43164.273: 99.1296% ( 6) 00:10:09.224 43164.273 - 43374.831: 99.1795% ( 7) 00:10:09.224 43374.831 - 43585.388: 99.2223% ( 6) 00:10:09.224 43585.388 - 43795.945: 99.2723% ( 7) 00:10:09.224 43795.945 - 
44006.503: 99.3151% ( 6) 00:10:09.224 44006.503 - 44217.060: 99.3721% ( 8) 00:10:09.224 44217.060 - 44427.618: 99.4221% ( 7) 00:10:09.224 44427.618 - 44638.175: 99.4720% ( 7) 00:10:09.224 44638.175 - 44848.733: 99.5148% ( 6) 00:10:09.224 44848.733 - 45059.290: 99.5434% ( 4) 00:10:09.224 49270.439 - 49480.996: 99.5862% ( 6) 00:10:09.224 49480.996 - 49691.553: 99.6433% ( 8) 00:10:09.224 49691.553 - 49902.111: 99.6932% ( 7) 00:10:09.224 49902.111 - 50112.668: 99.7432% ( 7) 00:10:09.224 50112.668 - 50323.226: 99.7860% ( 6) 00:10:09.224 50323.226 - 50533.783: 99.8359% ( 7) 00:10:09.224 50533.783 - 50744.341: 99.8858% ( 7) 00:10:09.224 50744.341 - 50954.898: 99.9358% ( 7) 00:10:09.224 50954.898 - 51165.455: 99.9929% ( 8) 00:10:09.224 51165.455 - 51376.013: 100.0000% ( 1) 00:10:09.224 00:10:09.224 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:09.224 ============================================================================== 00:10:09.224 Range in us Cumulative IO count 00:10:09.224 6685.198 - 6711.518: 0.0143% ( 2) 00:10:09.224 6711.518 - 6737.838: 0.0285% ( 2) 00:10:09.224 6737.838 - 6790.477: 0.0642% ( 5) 00:10:09.224 6790.477 - 6843.116: 0.0928% ( 4) 00:10:09.224 6843.116 - 6895.756: 0.1356% ( 6) 00:10:09.224 6895.756 - 6948.395: 0.2426% ( 15) 00:10:09.224 6948.395 - 7001.035: 0.4495% ( 29) 00:10:09.224 7001.035 - 7053.674: 0.7206% ( 38) 00:10:09.224 7053.674 - 7106.313: 1.0488% ( 46) 00:10:09.224 7106.313 - 7158.953: 1.4341% ( 54) 00:10:09.224 7158.953 - 7211.592: 2.0120% ( 81) 00:10:09.224 7211.592 - 7264.231: 2.8896% ( 123) 00:10:09.224 7264.231 - 7316.871: 3.9740% ( 152) 00:10:09.224 7316.871 - 7369.510: 5.2083% ( 173) 00:10:09.224 7369.510 - 7422.149: 6.4426% ( 173) 00:10:09.224 7422.149 - 7474.789: 8.0693% ( 228) 00:10:09.224 7474.789 - 7527.428: 9.8602% ( 251) 00:10:09.224 7527.428 - 7580.067: 11.6795% ( 255) 00:10:09.224 7580.067 - 7632.707: 13.0066% ( 186) 00:10:09.224 7632.707 - 7685.346: 14.1624% ( 162) 00:10:09.224 7685.346 - 7737.986: 15.1327% ( 136) 00:10:09.224 7737.986 - 7790.625: 16.1244% ( 139) 00:10:09.224 7790.625 - 7843.264: 16.8950% ( 108) 00:10:09.224 7843.264 - 7895.904: 17.7440% ( 119) 00:10:09.224 7895.904 - 7948.543: 18.9640% ( 171) 00:10:09.224 7948.543 - 8001.182: 20.1983% ( 173) 00:10:09.224 8001.182 - 8053.822: 21.5896% ( 195) 00:10:09.224 8053.822 - 8106.461: 22.9024% ( 184) 00:10:09.225 8106.461 - 8159.100: 24.2723% ( 192) 00:10:09.225 8159.100 - 8211.740: 25.9061% ( 229) 00:10:09.225 8211.740 - 8264.379: 27.5400% ( 229) 00:10:09.225 8264.379 - 8317.018: 29.6162% ( 291) 00:10:09.225 8317.018 - 8369.658: 31.9563% ( 328) 00:10:09.225 8369.658 - 8422.297: 34.6604% ( 379) 00:10:09.225 8422.297 - 8474.937: 37.5999% ( 412) 00:10:09.225 8474.937 - 8527.576: 41.0317% ( 481) 00:10:09.225 8527.576 - 8580.215: 44.6490% ( 507) 00:10:09.225 8580.215 - 8632.855: 48.4732% ( 536) 00:10:09.225 8632.855 - 8685.494: 52.4187% ( 553) 00:10:09.225 8685.494 - 8738.133: 56.2857% ( 542) 00:10:09.225 8738.133 - 8790.773: 59.9672% ( 516) 00:10:09.225 8790.773 - 8843.412: 63.3205% ( 470) 00:10:09.225 8843.412 - 8896.051: 66.5240% ( 449) 00:10:09.225 8896.051 - 8948.691: 69.7203% ( 448) 00:10:09.225 8948.691 - 9001.330: 72.8168% ( 434) 00:10:09.225 9001.330 - 9053.969: 75.7349% ( 409) 00:10:09.225 9053.969 - 9106.609: 78.4675% ( 383) 00:10:09.225 9106.609 - 9159.248: 80.8719% ( 337) 00:10:09.225 9159.248 - 9211.888: 82.8268% ( 274) 00:10:09.225 9211.888 - 9264.527: 84.5034% ( 235) 00:10:09.225 9264.527 - 9317.166: 85.9304% ( 200) 00:10:09.225 9317.166 - 9369.806: 
86.9863% ( 148) 00:10:09.225 9369.806 - 9422.445: 87.8139% ( 116) 00:10:09.225 9422.445 - 9475.084: 88.4132% ( 84) 00:10:09.225 9475.084 - 9527.724: 88.7771% ( 51) 00:10:09.225 9527.724 - 9580.363: 89.0625% ( 40) 00:10:09.225 9580.363 - 9633.002: 89.2908% ( 32) 00:10:09.225 9633.002 - 9685.642: 89.5833% ( 41) 00:10:09.225 9685.642 - 9738.281: 89.9044% ( 45) 00:10:09.225 9738.281 - 9790.920: 90.1612% ( 36) 00:10:09.225 9790.920 - 9843.560: 90.4395% ( 39) 00:10:09.225 9843.560 - 9896.199: 90.7320% ( 41) 00:10:09.225 9896.199 - 9948.839: 90.9817% ( 35) 00:10:09.225 9948.839 - 10001.478: 91.2172% ( 33) 00:10:09.225 10001.478 - 10054.117: 91.4526% ( 33) 00:10:09.225 10054.117 - 10106.757: 91.6738% ( 31) 00:10:09.225 10106.757 - 10159.396: 91.8308% ( 22) 00:10:09.225 10159.396 - 10212.035: 92.0305% ( 28) 00:10:09.225 10212.035 - 10264.675: 92.2160% ( 26) 00:10:09.225 10264.675 - 10317.314: 92.3801% ( 23) 00:10:09.225 10317.314 - 10369.953: 92.5442% ( 23) 00:10:09.225 10369.953 - 10422.593: 92.7155% ( 24) 00:10:09.225 10422.593 - 10475.232: 92.8439% ( 18) 00:10:09.225 10475.232 - 10527.871: 92.9295% ( 12) 00:10:09.225 10527.871 - 10580.511: 92.9937% ( 9) 00:10:09.225 10580.511 - 10633.150: 93.0722% ( 11) 00:10:09.225 10633.150 - 10685.790: 93.1364% ( 9) 00:10:09.225 10685.790 - 10738.429: 93.1792% ( 6) 00:10:09.225 10738.429 - 10791.068: 93.2720% ( 13) 00:10:09.225 10791.068 - 10843.708: 93.3148% ( 6) 00:10:09.225 10843.708 - 10896.347: 93.3719% ( 8) 00:10:09.225 10896.347 - 10948.986: 93.4432% ( 10) 00:10:09.225 10948.986 - 11001.626: 93.5217% ( 11) 00:10:09.225 11001.626 - 11054.265: 93.6216% ( 14) 00:10:09.225 11054.265 - 11106.904: 93.6858% ( 9) 00:10:09.225 11106.904 - 11159.544: 93.7714% ( 12) 00:10:09.225 11159.544 - 11212.183: 93.8642% ( 13) 00:10:09.225 11212.183 - 11264.822: 93.9997% ( 19) 00:10:09.225 11264.822 - 11317.462: 94.0782% ( 11) 00:10:09.225 11317.462 - 11370.101: 94.1638% ( 12) 00:10:09.225 11370.101 - 11422.741: 94.2423% ( 11) 00:10:09.225 11422.741 - 11475.380: 94.3422% ( 14) 00:10:09.225 11475.380 - 11528.019: 94.4278% ( 12) 00:10:09.225 11528.019 - 11580.659: 94.5134% ( 12) 00:10:09.225 11580.659 - 11633.298: 94.6133% ( 14) 00:10:09.225 11633.298 - 11685.937: 94.6846% ( 10) 00:10:09.225 11685.937 - 11738.577: 94.7774% ( 13) 00:10:09.225 11738.577 - 11791.216: 94.8773% ( 14) 00:10:09.225 11791.216 - 11843.855: 94.9415% ( 9) 00:10:09.225 11843.855 - 11896.495: 95.0200% ( 11) 00:10:09.225 11896.495 - 11949.134: 95.0842% ( 9) 00:10:09.225 11949.134 - 12001.773: 95.1555% ( 10) 00:10:09.225 12001.773 - 12054.413: 95.2340% ( 11) 00:10:09.225 12054.413 - 12107.052: 95.3054% ( 10) 00:10:09.225 12107.052 - 12159.692: 95.3624% ( 8) 00:10:09.225 12159.692 - 12212.331: 95.4267% ( 9) 00:10:09.225 12212.331 - 12264.970: 95.4837% ( 8) 00:10:09.225 12264.970 - 12317.610: 95.5408% ( 8) 00:10:09.225 12317.610 - 12370.249: 95.5836% ( 6) 00:10:09.225 12370.249 - 12422.888: 95.6336% ( 7) 00:10:09.225 12422.888 - 12475.528: 95.6835% ( 7) 00:10:09.225 12475.528 - 12528.167: 95.7549% ( 10) 00:10:09.225 12528.167 - 12580.806: 95.8048% ( 7) 00:10:09.225 12580.806 - 12633.446: 95.8547% ( 7) 00:10:09.225 12633.446 - 12686.085: 95.9047% ( 7) 00:10:09.225 12686.085 - 12738.724: 95.9404% ( 5) 00:10:09.225 12738.724 - 12791.364: 95.9618% ( 3) 00:10:09.225 12791.364 - 12844.003: 95.9903% ( 4) 00:10:09.225 12844.003 - 12896.643: 96.0117% ( 3) 00:10:09.225 12896.643 - 12949.282: 96.0260% ( 2) 00:10:09.225 12949.282 - 13001.921: 96.0331% ( 1) 00:10:09.225 13001.921 - 13054.561: 96.0474% ( 2) 00:10:09.225 
13054.561 - 13107.200: 96.0616% ( 2) 00:10:09.225 13107.200 - 13159.839: 96.0759% ( 2) 00:10:09.225 13159.839 - 13212.479: 96.0902% ( 2) 00:10:09.225 13212.479 - 13265.118: 96.1045% ( 2) 00:10:09.225 13265.118 - 13317.757: 96.1259% ( 3) 00:10:09.225 13317.757 - 13370.397: 96.1544% ( 4) 00:10:09.225 13370.397 - 13423.036: 96.1758% ( 3) 00:10:09.225 13423.036 - 13475.676: 96.2043% ( 4) 00:10:09.225 13475.676 - 13580.954: 96.2400% ( 5) 00:10:09.225 13580.954 - 13686.233: 96.2900% ( 7) 00:10:09.225 13686.233 - 13791.512: 96.3898% ( 14) 00:10:09.225 13791.512 - 13896.790: 96.4826% ( 13) 00:10:09.225 13896.790 - 14002.069: 96.5611% ( 11) 00:10:09.225 14002.069 - 14107.348: 96.6396% ( 11) 00:10:09.225 14107.348 - 14212.627: 96.7323% ( 13) 00:10:09.225 14212.627 - 14317.905: 96.8179% ( 12) 00:10:09.225 14317.905 - 14423.184: 96.9392% ( 17) 00:10:09.225 14423.184 - 14528.463: 97.0605% ( 17) 00:10:09.225 14528.463 - 14633.741: 97.1390% ( 11) 00:10:09.225 14633.741 - 14739.020: 97.2460% ( 15) 00:10:09.225 14739.020 - 14844.299: 97.3673% ( 17) 00:10:09.225 14844.299 - 14949.578: 97.4886% ( 17) 00:10:09.225 14949.578 - 15054.856: 97.6170% ( 18) 00:10:09.225 15054.856 - 15160.135: 97.7882% ( 24) 00:10:09.225 15160.135 - 15265.414: 97.9095% ( 17) 00:10:09.225 15265.414 - 15370.692: 98.0380% ( 18) 00:10:09.225 15370.692 - 15475.971: 98.1592% ( 17) 00:10:09.225 15475.971 - 15581.250: 98.2734% ( 16) 00:10:09.225 15581.250 - 15686.529: 98.3590% ( 12) 00:10:09.225 15686.529 - 15791.807: 98.4304% ( 10) 00:10:09.225 15791.807 - 15897.086: 98.5017% ( 10) 00:10:09.225 15897.086 - 16002.365: 98.5374% ( 5) 00:10:09.225 16002.365 - 16107.643: 98.5659% ( 4) 00:10:09.225 16107.643 - 16212.922: 98.6016% ( 5) 00:10:09.225 16212.922 - 16318.201: 98.6301% ( 4) 00:10:09.225 17055.152 - 17160.431: 98.6444% ( 2) 00:10:09.225 17160.431 - 17265.709: 98.6801% ( 5) 00:10:09.225 17265.709 - 17370.988: 98.7158% ( 5) 00:10:09.225 17370.988 - 17476.267: 98.7586% ( 6) 00:10:09.225 17476.267 - 17581.545: 98.8014% ( 6) 00:10:09.225 17581.545 - 17686.824: 98.8513% ( 7) 00:10:09.225 17686.824 - 17792.103: 98.8941% ( 6) 00:10:09.225 17792.103 - 17897.382: 98.9369% ( 6) 00:10:09.225 17897.382 - 18002.660: 98.9797% ( 6) 00:10:09.225 18002.660 - 18107.939: 99.0297% ( 7) 00:10:09.225 18107.939 - 18213.218: 99.0654% ( 5) 00:10:09.225 18213.218 - 18318.496: 99.0868% ( 3) 00:10:09.225 40848.141 - 41058.699: 99.1296% ( 6) 00:10:09.225 41058.699 - 41269.256: 99.1724% ( 6) 00:10:09.225 41269.256 - 41479.814: 99.2295% ( 8) 00:10:09.225 41479.814 - 41690.371: 99.2723% ( 6) 00:10:09.225 41690.371 - 41900.929: 99.3293% ( 8) 00:10:09.225 41900.929 - 42111.486: 99.3793% ( 7) 00:10:09.225 42111.486 - 42322.043: 99.4292% ( 7) 00:10:09.226 42322.043 - 42532.601: 99.4792% ( 7) 00:10:09.226 42532.601 - 42743.158: 99.5291% ( 7) 00:10:09.226 42743.158 - 42953.716: 99.5434% ( 2) 00:10:09.226 46954.307 - 47164.864: 99.5648% ( 3) 00:10:09.226 47164.864 - 47375.422: 99.6147% ( 7) 00:10:09.226 47375.422 - 47585.979: 99.6647% ( 7) 00:10:09.226 47585.979 - 47796.537: 99.7217% ( 8) 00:10:09.226 47796.537 - 48007.094: 99.7717% ( 7) 00:10:09.226 48007.094 - 48217.651: 99.8288% ( 8) 00:10:09.226 48217.651 - 48428.209: 99.8858% ( 8) 00:10:09.226 48428.209 - 48638.766: 99.9358% ( 7) 00:10:09.226 48638.766 - 48849.324: 99.9857% ( 7) 00:10:09.226 48849.324 - 49059.881: 100.0000% ( 2) 00:10:09.226 00:10:09.226 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:09.226 ============================================================================== 00:10:09.226 
Range in us Cumulative IO count 00:10:09.226 6895.756 - 6948.395: 0.0285% ( 4) 00:10:09.226 6948.395 - 7001.035: 0.0642% ( 5) 00:10:09.226 7001.035 - 7053.674: 0.1142% ( 7) 00:10:09.226 7053.674 - 7106.313: 0.3139% ( 28) 00:10:09.226 7106.313 - 7158.953: 0.7491% ( 61) 00:10:09.226 7158.953 - 7211.592: 1.3770% ( 88) 00:10:09.226 7211.592 - 7264.231: 2.2831% ( 127) 00:10:09.226 7264.231 - 7316.871: 3.4960% ( 170) 00:10:09.226 7316.871 - 7369.510: 5.1013% ( 225) 00:10:09.226 7369.510 - 7422.149: 6.6852% ( 222) 00:10:09.226 7422.149 - 7474.789: 8.6259% ( 272) 00:10:09.226 7474.789 - 7527.428: 10.2811% ( 232) 00:10:09.226 7527.428 - 7580.067: 12.0291% ( 245) 00:10:09.226 7580.067 - 7632.707: 13.5773% ( 217) 00:10:09.226 7632.707 - 7685.346: 14.8330% ( 176) 00:10:09.226 7685.346 - 7737.986: 15.7106% ( 123) 00:10:09.226 7737.986 - 7790.625: 16.6096% ( 126) 00:10:09.226 7790.625 - 7843.264: 17.4301% ( 115) 00:10:09.226 7843.264 - 7895.904: 18.0793% ( 91) 00:10:09.226 7895.904 - 7948.543: 19.0853% ( 141) 00:10:09.226 7948.543 - 8001.182: 20.2197% ( 159) 00:10:09.226 8001.182 - 8053.822: 21.3399% ( 157) 00:10:09.226 8053.822 - 8106.461: 22.5671% ( 172) 00:10:09.226 8106.461 - 8159.100: 23.9155% ( 189) 00:10:09.226 8159.100 - 8211.740: 25.4209% ( 211) 00:10:09.226 8211.740 - 8264.379: 27.3116% ( 265) 00:10:09.226 8264.379 - 8317.018: 29.3807% ( 290) 00:10:09.226 8317.018 - 8369.658: 31.7708% ( 335) 00:10:09.226 8369.658 - 8422.297: 34.6675% ( 406) 00:10:09.226 8422.297 - 8474.937: 37.8282% ( 443) 00:10:09.226 8474.937 - 8527.576: 41.2671% ( 482) 00:10:09.226 8527.576 - 8580.215: 44.8987% ( 509) 00:10:09.226 8580.215 - 8632.855: 48.5873% ( 517) 00:10:09.226 8632.855 - 8685.494: 52.4757% ( 545) 00:10:09.226 8685.494 - 8738.133: 56.4070% ( 551) 00:10:09.226 8738.133 - 8790.773: 60.1527% ( 525) 00:10:09.226 8790.773 - 8843.412: 63.6630% ( 492) 00:10:09.226 8843.412 - 8896.051: 67.0662% ( 477) 00:10:09.226 8896.051 - 8948.691: 70.4195% ( 470) 00:10:09.226 8948.691 - 9001.330: 73.5659% ( 441) 00:10:09.226 9001.330 - 9053.969: 76.5054% ( 412) 00:10:09.226 9053.969 - 9106.609: 79.1952% ( 377) 00:10:09.226 9106.609 - 9159.248: 81.4997% ( 323) 00:10:09.226 9159.248 - 9211.888: 83.4189% ( 269) 00:10:09.226 9211.888 - 9264.527: 85.0100% ( 223) 00:10:09.226 9264.527 - 9317.166: 86.2586% ( 175) 00:10:09.226 9317.166 - 9369.806: 87.2217% ( 135) 00:10:09.226 9369.806 - 9422.445: 88.0636% ( 118) 00:10:09.226 9422.445 - 9475.084: 88.6701% ( 85) 00:10:09.226 9475.084 - 9527.724: 89.1410% ( 66) 00:10:09.226 9527.724 - 9580.363: 89.4977% ( 50) 00:10:09.226 9580.363 - 9633.002: 89.7831% ( 40) 00:10:09.226 9633.002 - 9685.642: 90.1113% ( 46) 00:10:09.226 9685.642 - 9738.281: 90.3610% ( 35) 00:10:09.226 9738.281 - 9790.920: 90.6250% ( 37) 00:10:09.226 9790.920 - 9843.560: 90.8390% ( 30) 00:10:09.226 9843.560 - 9896.199: 91.0602% ( 31) 00:10:09.226 9896.199 - 9948.839: 91.2529% ( 27) 00:10:09.226 9948.839 - 10001.478: 91.4384% ( 26) 00:10:09.226 10001.478 - 10054.117: 91.6310% ( 27) 00:10:09.226 10054.117 - 10106.757: 91.8022% ( 24) 00:10:09.226 10106.757 - 10159.396: 91.9592% ( 22) 00:10:09.226 10159.396 - 10212.035: 92.1304% ( 24) 00:10:09.226 10212.035 - 10264.675: 92.2803% ( 21) 00:10:09.226 10264.675 - 10317.314: 92.4087% ( 18) 00:10:09.226 10317.314 - 10369.953: 92.5371% ( 18) 00:10:09.226 10369.953 - 10422.593: 92.6299% ( 13) 00:10:09.226 10422.593 - 10475.232: 92.7012% ( 10) 00:10:09.226 10475.232 - 10527.871: 92.7583% ( 8) 00:10:09.226 10527.871 - 10580.511: 92.8225% ( 9) 00:10:09.226 10580.511 - 10633.150: 92.8724% 
( 7) 00:10:09.226 10633.150 - 10685.790: 92.9295% ( 8) 00:10:09.226 10685.790 - 10738.429: 93.0080% ( 11) 00:10:09.226 10738.429 - 10791.068: 93.1007% ( 13) 00:10:09.226 10791.068 - 10843.708: 93.1721% ( 10) 00:10:09.226 10843.708 - 10896.347: 93.2434% ( 10) 00:10:09.226 10896.347 - 10948.986: 93.3148% ( 10) 00:10:09.226 10948.986 - 11001.626: 93.3719% ( 8) 00:10:09.226 11001.626 - 11054.265: 93.4218% ( 7) 00:10:09.226 11054.265 - 11106.904: 93.4932% ( 10) 00:10:09.226 11106.904 - 11159.544: 93.5431% ( 7) 00:10:09.226 11159.544 - 11212.183: 93.6287% ( 12) 00:10:09.226 11212.183 - 11264.822: 93.7286% ( 14) 00:10:09.226 11264.822 - 11317.462: 93.8213% ( 13) 00:10:09.226 11317.462 - 11370.101: 93.9284% ( 15) 00:10:09.226 11370.101 - 11422.741: 94.0283% ( 14) 00:10:09.226 11422.741 - 11475.380: 94.0996% ( 10) 00:10:09.226 11475.380 - 11528.019: 94.1781% ( 11) 00:10:09.226 11528.019 - 11580.659: 94.2780% ( 14) 00:10:09.226 11580.659 - 11633.298: 94.3636% ( 12) 00:10:09.226 11633.298 - 11685.937: 94.4635% ( 14) 00:10:09.226 11685.937 - 11738.577: 94.5634% ( 14) 00:10:09.226 11738.577 - 11791.216: 94.6846% ( 17) 00:10:09.226 11791.216 - 11843.855: 94.7988% ( 16) 00:10:09.226 11843.855 - 11896.495: 94.8916% ( 13) 00:10:09.226 11896.495 - 11949.134: 94.9843% ( 13) 00:10:09.226 11949.134 - 12001.773: 95.0842% ( 14) 00:10:09.226 12001.773 - 12054.413: 95.1413% ( 8) 00:10:09.226 12054.413 - 12107.052: 95.1841% ( 6) 00:10:09.226 12107.052 - 12159.692: 95.2269% ( 6) 00:10:09.226 12159.692 - 12212.331: 95.2840% ( 8) 00:10:09.226 12212.331 - 12264.970: 95.3339% ( 7) 00:10:09.226 12264.970 - 12317.610: 95.3767% ( 6) 00:10:09.226 12317.610 - 12370.249: 95.4195% ( 6) 00:10:09.226 12370.249 - 12422.888: 95.4766% ( 8) 00:10:09.226 12422.888 - 12475.528: 95.5337% ( 8) 00:10:09.226 12475.528 - 12528.167: 95.5836% ( 7) 00:10:09.226 12528.167 - 12580.806: 95.6193% ( 5) 00:10:09.226 12580.806 - 12633.446: 95.6764% ( 8) 00:10:09.226 12633.446 - 12686.085: 95.7120% ( 5) 00:10:09.226 12686.085 - 12738.724: 95.7620% ( 7) 00:10:09.226 12738.724 - 12791.364: 95.8048% ( 6) 00:10:09.226 12791.364 - 12844.003: 95.8405% ( 5) 00:10:09.226 12844.003 - 12896.643: 95.8833% ( 6) 00:10:09.226 12896.643 - 12949.282: 95.9189% ( 5) 00:10:09.226 12949.282 - 13001.921: 95.9689% ( 7) 00:10:09.226 13001.921 - 13054.561: 96.0331% ( 9) 00:10:09.226 13054.561 - 13107.200: 96.0902% ( 8) 00:10:09.226 13107.200 - 13159.839: 96.1615% ( 10) 00:10:09.227 13159.839 - 13212.479: 96.2400% ( 11) 00:10:09.227 13212.479 - 13265.118: 96.2900% ( 7) 00:10:09.227 13265.118 - 13317.757: 96.3328% ( 6) 00:10:09.227 13317.757 - 13370.397: 96.3684% ( 5) 00:10:09.227 13370.397 - 13423.036: 96.4184% ( 7) 00:10:09.227 13423.036 - 13475.676: 96.4612% ( 6) 00:10:09.227 13475.676 - 13580.954: 96.5539% ( 13) 00:10:09.227 13580.954 - 13686.233: 96.6467% ( 13) 00:10:09.227 13686.233 - 13791.512: 96.7394% ( 13) 00:10:09.227 13791.512 - 13896.790: 96.8393% ( 14) 00:10:09.227 13896.790 - 14002.069: 96.9178% ( 11) 00:10:09.227 14002.069 - 14107.348: 97.0106% ( 13) 00:10:09.227 14107.348 - 14212.627: 97.0890% ( 11) 00:10:09.227 14212.627 - 14317.905: 97.1318% ( 6) 00:10:09.227 14317.905 - 14423.184: 97.1604% ( 4) 00:10:09.227 14423.184 - 14528.463: 97.1747% ( 2) 00:10:09.227 14528.463 - 14633.741: 97.2032% ( 4) 00:10:09.227 14633.741 - 14739.020: 97.2603% ( 8) 00:10:09.227 14739.020 - 14844.299: 97.3245% ( 9) 00:10:09.227 14844.299 - 14949.578: 97.3744% ( 7) 00:10:09.227 14949.578 - 15054.856: 97.4030% ( 4) 00:10:09.227 15054.856 - 15160.135: 97.4743% ( 10) 00:10:09.227 
15160.135 - 15265.414: 97.5314% ( 8) 00:10:09.227 15265.414 - 15370.692: 97.6027% ( 10) 00:10:09.227 15370.692 - 15475.971: 97.6670% ( 9) 00:10:09.227 15475.971 - 15581.250: 97.7383% ( 10) 00:10:09.227 15581.250 - 15686.529: 97.8025% ( 9) 00:10:09.227 15686.529 - 15791.807: 97.8739% ( 10) 00:10:09.227 15791.807 - 15897.086: 97.9452% ( 10) 00:10:09.227 15897.086 - 16002.365: 97.9951% ( 7) 00:10:09.227 16002.365 - 16107.643: 98.0522% ( 8) 00:10:09.227 16107.643 - 16212.922: 98.1378% ( 12) 00:10:09.227 16212.922 - 16318.201: 98.2663% ( 18) 00:10:09.227 16318.201 - 16423.480: 98.3804% ( 16) 00:10:09.227 16423.480 - 16528.758: 98.4946% ( 16) 00:10:09.227 16528.758 - 16634.037: 98.6230% ( 18) 00:10:09.227 16634.037 - 16739.316: 98.7158% ( 13) 00:10:09.227 16739.316 - 16844.594: 98.8156% ( 14) 00:10:09.227 16844.594 - 16949.873: 98.9084% ( 13) 00:10:09.227 16949.873 - 17055.152: 98.9655% ( 8) 00:10:09.227 17055.152 - 17160.431: 99.0154% ( 7) 00:10:09.227 17160.431 - 17265.709: 99.0582% ( 6) 00:10:09.227 17265.709 - 17370.988: 99.0868% ( 4) 00:10:09.227 38953.124 - 39163.682: 99.1153% ( 4) 00:10:09.227 39163.682 - 39374.239: 99.1652% ( 7) 00:10:09.227 39374.239 - 39584.797: 99.2152% ( 7) 00:10:09.227 39584.797 - 39795.354: 99.2723% ( 8) 00:10:09.227 39795.354 - 40005.912: 99.3151% ( 6) 00:10:09.227 40005.912 - 40216.469: 99.3721% ( 8) 00:10:09.227 40216.469 - 40427.027: 99.4221% ( 7) 00:10:09.227 40427.027 - 40637.584: 99.4720% ( 7) 00:10:09.227 40637.584 - 40848.141: 99.5291% ( 8) 00:10:09.227 40848.141 - 41058.699: 99.5434% ( 2) 00:10:09.227 45269.847 - 45480.405: 99.5719% ( 4) 00:10:09.227 45480.405 - 45690.962: 99.6219% ( 7) 00:10:09.227 45690.962 - 45901.520: 99.6718% ( 7) 00:10:09.227 45901.520 - 46112.077: 99.7217% ( 7) 00:10:09.227 46112.077 - 46322.635: 99.7717% ( 7) 00:10:09.227 46322.635 - 46533.192: 99.8145% ( 6) 00:10:09.227 46533.192 - 46743.749: 99.8644% ( 7) 00:10:09.227 46743.749 - 46954.307: 99.9144% ( 7) 00:10:09.227 46954.307 - 47164.864: 99.9643% ( 7) 00:10:09.227 47164.864 - 47375.422: 100.0000% ( 5) 00:10:09.227 00:10:09.227 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:09.227 ============================================================================== 00:10:09.227 Range in us Cumulative IO count 00:10:09.227 6790.477 - 6843.116: 0.0214% ( 3) 00:10:09.227 6843.116 - 6895.756: 0.0856% ( 9) 00:10:09.227 6895.756 - 6948.395: 0.1855% ( 14) 00:10:09.227 6948.395 - 7001.035: 0.2925% ( 15) 00:10:09.227 7001.035 - 7053.674: 0.4780% ( 26) 00:10:09.227 7053.674 - 7106.313: 0.8205% ( 48) 00:10:09.227 7106.313 - 7158.953: 1.2771% ( 64) 00:10:09.227 7158.953 - 7211.592: 1.8907% ( 86) 00:10:09.227 7211.592 - 7264.231: 2.8039% ( 128) 00:10:09.227 7264.231 - 7316.871: 3.7457% ( 132) 00:10:09.227 7316.871 - 7369.510: 5.0728% ( 186) 00:10:09.227 7369.510 - 7422.149: 6.5497% ( 207) 00:10:09.227 7422.149 - 7474.789: 8.0551% ( 211) 00:10:09.227 7474.789 - 7527.428: 9.5890% ( 215) 00:10:09.227 7527.428 - 7580.067: 11.2586% ( 234) 00:10:09.227 7580.067 - 7632.707: 12.7426% ( 208) 00:10:09.227 7632.707 - 7685.346: 14.1909% ( 203) 00:10:09.227 7685.346 - 7737.986: 15.3896% ( 168) 00:10:09.227 7737.986 - 7790.625: 16.2600% ( 122) 00:10:09.227 7790.625 - 7843.264: 17.0305% ( 108) 00:10:09.227 7843.264 - 7895.904: 17.7868% ( 106) 00:10:09.227 7895.904 - 7948.543: 18.8285% ( 146) 00:10:09.227 7948.543 - 8001.182: 20.1484% ( 185) 00:10:09.227 8001.182 - 8053.822: 21.5183% ( 192) 00:10:09.227 8053.822 - 8106.461: 22.7740% ( 176) 00:10:09.227 8106.461 - 8159.100: 24.0654% ( 181) 
00:10:09.227 8159.100 - 8211.740: 25.5280% ( 205) 00:10:09.227 8211.740 - 8264.379: 27.2760% ( 245) 00:10:09.227 8264.379 - 8317.018: 29.3522% ( 291) 00:10:09.227 8317.018 - 8369.658: 31.7280% ( 333) 00:10:09.227 8369.658 - 8422.297: 34.5962% ( 402) 00:10:09.227 8422.297 - 8474.937: 37.6712% ( 431) 00:10:09.227 8474.937 - 8527.576: 41.1815% ( 492) 00:10:09.227 8527.576 - 8580.215: 44.7917% ( 506) 00:10:09.227 8580.215 - 8632.855: 48.5731% ( 530) 00:10:09.227 8632.855 - 8685.494: 52.5828% ( 562) 00:10:09.227 8685.494 - 8738.133: 56.3927% ( 534) 00:10:09.227 8738.133 - 8790.773: 60.0813% ( 517) 00:10:09.227 8790.773 - 8843.412: 63.6130% ( 495) 00:10:09.227 8843.412 - 8896.051: 67.0947% ( 488) 00:10:09.227 8896.051 - 8948.691: 70.3981% ( 463) 00:10:09.227 8948.691 - 9001.330: 73.5374% ( 440) 00:10:09.227 9001.330 - 9053.969: 76.4912% ( 414) 00:10:09.227 9053.969 - 9106.609: 79.1667% ( 375) 00:10:09.227 9106.609 - 9159.248: 81.4212% ( 316) 00:10:09.227 9159.248 - 9211.888: 83.4189% ( 280) 00:10:09.227 9211.888 - 9264.527: 85.0528% ( 229) 00:10:09.227 9264.527 - 9317.166: 86.3584% ( 183) 00:10:09.227 9317.166 - 9369.806: 87.4287% ( 150) 00:10:09.227 9369.806 - 9422.445: 88.2349% ( 113) 00:10:09.227 9422.445 - 9475.084: 88.8271% ( 83) 00:10:09.227 9475.084 - 9527.724: 89.2337% ( 57) 00:10:09.227 9527.724 - 9580.363: 89.5833% ( 49) 00:10:09.227 9580.363 - 9633.002: 89.8830% ( 42) 00:10:09.227 9633.002 - 9685.642: 90.1684% ( 40) 00:10:09.227 9685.642 - 9738.281: 90.4466% ( 39) 00:10:09.227 9738.281 - 9790.920: 90.7178% ( 38) 00:10:09.227 9790.920 - 9843.560: 90.9389% ( 31) 00:10:09.227 9843.560 - 9896.199: 91.1173% ( 25) 00:10:09.227 9896.199 - 9948.839: 91.3313% ( 30) 00:10:09.227 9948.839 - 10001.478: 91.5097% ( 25) 00:10:09.227 10001.478 - 10054.117: 91.6881% ( 25) 00:10:09.227 10054.117 - 10106.757: 91.8664% ( 25) 00:10:09.227 10106.757 - 10159.396: 92.0448% ( 25) 00:10:09.227 10159.396 - 10212.035: 92.1946% ( 21) 00:10:09.227 10212.035 - 10264.675: 92.3373% ( 20) 00:10:09.227 10264.675 - 10317.314: 92.4729% ( 19) 00:10:09.227 10317.314 - 10369.953: 92.5942% ( 17) 00:10:09.227 10369.953 - 10422.593: 92.7155% ( 17) 00:10:09.227 10422.593 - 10475.232: 92.8011% ( 12) 00:10:09.227 10475.232 - 10527.871: 92.8796% ( 11) 00:10:09.227 10527.871 - 10580.511: 92.9652% ( 12) 00:10:09.227 10580.511 - 10633.150: 93.0437% ( 11) 00:10:09.228 10633.150 - 10685.790: 93.1293% ( 12) 00:10:09.228 10685.790 - 10738.429: 93.2006% ( 10) 00:10:09.228 10738.429 - 10791.068: 93.2862% ( 12) 00:10:09.228 10791.068 - 10843.708: 93.3647% ( 11) 00:10:09.228 10843.708 - 10896.347: 93.4432% ( 11) 00:10:09.228 10896.347 - 10948.986: 93.5146% ( 10) 00:10:09.228 10948.986 - 11001.626: 93.5788% ( 9) 00:10:09.228 11001.626 - 11054.265: 93.6358% ( 8) 00:10:09.228 11054.265 - 11106.904: 93.7072% ( 10) 00:10:09.228 11106.904 - 11159.544: 93.7643% ( 8) 00:10:09.228 11159.544 - 11212.183: 93.7999% ( 5) 00:10:09.228 11212.183 - 11264.822: 93.8570% ( 8) 00:10:09.228 11264.822 - 11317.462: 93.8998% ( 6) 00:10:09.228 11317.462 - 11370.101: 93.9498% ( 7) 00:10:09.228 11370.101 - 11422.741: 93.9997% ( 7) 00:10:09.228 11422.741 - 11475.380: 94.0425% ( 6) 00:10:09.228 11475.380 - 11528.019: 94.0853% ( 6) 00:10:09.228 11528.019 - 11580.659: 94.1424% ( 8) 00:10:09.228 11580.659 - 11633.298: 94.1852% ( 6) 00:10:09.228 11633.298 - 11685.937: 94.2352% ( 7) 00:10:09.228 11685.937 - 11738.577: 94.2994% ( 9) 00:10:09.228 11738.577 - 11791.216: 94.3422% ( 6) 00:10:09.228 11791.216 - 11843.855: 94.3779% ( 5) 00:10:09.228 11843.855 - 11896.495: 94.4207% 
( 6) 00:10:09.228 11896.495 - 11949.134: 94.4563% ( 5) 00:10:09.228 11949.134 - 12001.773: 94.4991% ( 6) 00:10:09.228 12001.773 - 12054.413: 94.5848% ( 12) 00:10:09.228 12054.413 - 12107.052: 94.6561% ( 10) 00:10:09.228 12107.052 - 12159.692: 94.7061% ( 7) 00:10:09.228 12159.692 - 12212.331: 94.7560% ( 7) 00:10:09.228 12212.331 - 12264.970: 94.8202% ( 9) 00:10:09.228 12264.970 - 12317.610: 94.8701% ( 7) 00:10:09.228 12317.610 - 12370.249: 94.9344% ( 9) 00:10:09.228 12370.249 - 12422.888: 95.0271% ( 13) 00:10:09.228 12422.888 - 12475.528: 95.0985% ( 10) 00:10:09.228 12475.528 - 12528.167: 95.1769% ( 11) 00:10:09.228 12528.167 - 12580.806: 95.2483% ( 10) 00:10:09.228 12580.806 - 12633.446: 95.3196% ( 10) 00:10:09.228 12633.446 - 12686.085: 95.4053% ( 12) 00:10:09.228 12686.085 - 12738.724: 95.4980% ( 13) 00:10:09.228 12738.724 - 12791.364: 95.5765% ( 11) 00:10:09.228 12791.364 - 12844.003: 95.6692% ( 13) 00:10:09.228 12844.003 - 12896.643: 95.7477% ( 11) 00:10:09.228 12896.643 - 12949.282: 95.8191% ( 10) 00:10:09.228 12949.282 - 13001.921: 95.8904% ( 10) 00:10:09.228 13001.921 - 13054.561: 95.9689% ( 11) 00:10:09.228 13054.561 - 13107.200: 96.0474% ( 11) 00:10:09.228 13107.200 - 13159.839: 96.1259% ( 11) 00:10:09.228 13159.839 - 13212.479: 96.2043% ( 11) 00:10:09.228 13212.479 - 13265.118: 96.2828% ( 11) 00:10:09.228 13265.118 - 13317.757: 96.3542% ( 10) 00:10:09.228 13317.757 - 13370.397: 96.4326% ( 11) 00:10:09.228 13370.397 - 13423.036: 96.4969% ( 9) 00:10:09.228 13423.036 - 13475.676: 96.5753% ( 11) 00:10:09.228 13475.676 - 13580.954: 96.7394% ( 23) 00:10:09.228 13580.954 - 13686.233: 96.8821% ( 20) 00:10:09.228 13686.233 - 13791.512: 97.0106% ( 18) 00:10:09.228 13791.512 - 13896.790: 97.1176% ( 15) 00:10:09.228 13896.790 - 14002.069: 97.1747% ( 8) 00:10:09.228 14002.069 - 14107.348: 97.1961% ( 3) 00:10:09.228 14107.348 - 14212.627: 97.2246% ( 4) 00:10:09.228 14212.627 - 14317.905: 97.2531% ( 4) 00:10:09.228 14317.905 - 14423.184: 97.2603% ( 1) 00:10:09.228 14528.463 - 14633.741: 97.2817% ( 3) 00:10:09.228 14633.741 - 14739.020: 97.2959% ( 2) 00:10:09.228 14739.020 - 14844.299: 97.3174% ( 3) 00:10:09.228 14844.299 - 14949.578: 97.3744% ( 8) 00:10:09.228 14949.578 - 15054.856: 97.4244% ( 7) 00:10:09.228 15054.856 - 15160.135: 97.4814% ( 8) 00:10:09.228 15160.135 - 15265.414: 97.5385% ( 8) 00:10:09.228 15265.414 - 15370.692: 97.5956% ( 8) 00:10:09.228 15370.692 - 15475.971: 97.6527% ( 8) 00:10:09.228 15475.971 - 15581.250: 97.7098% ( 8) 00:10:09.228 15581.250 - 15686.529: 97.7740% ( 9) 00:10:09.228 15686.529 - 15791.807: 97.8667% ( 13) 00:10:09.228 15791.807 - 15897.086: 97.9737% ( 15) 00:10:09.228 15897.086 - 16002.365: 98.0665% ( 13) 00:10:09.228 16002.365 - 16107.643: 98.1664% ( 14) 00:10:09.228 16107.643 - 16212.922: 98.2377% ( 10) 00:10:09.228 16212.922 - 16318.201: 98.3019% ( 9) 00:10:09.228 16318.201 - 16423.480: 98.3662% ( 9) 00:10:09.228 16423.480 - 16528.758: 98.4375% ( 10) 00:10:09.228 16528.758 - 16634.037: 98.5017% ( 9) 00:10:09.228 16634.037 - 16739.316: 98.5945% ( 13) 00:10:09.228 16739.316 - 16844.594: 98.7086% ( 16) 00:10:09.228 16844.594 - 16949.873: 98.7728% ( 9) 00:10:09.228 16949.873 - 17055.152: 98.8228% ( 7) 00:10:09.228 17055.152 - 17160.431: 98.8656% ( 6) 00:10:09.228 17160.431 - 17265.709: 98.9155% ( 7) 00:10:09.228 17265.709 - 17370.988: 98.9655% ( 7) 00:10:09.228 17370.988 - 17476.267: 99.0225% ( 8) 00:10:09.228 17476.267 - 17581.545: 99.0725% ( 7) 00:10:09.228 17581.545 - 17686.824: 99.0868% ( 2) 00:10:09.228 36636.993 - 36847.550: 99.1082% ( 3) 00:10:09.228 
36847.550 - 37058.108: 99.1581% ( 7) 00:10:09.228 37058.108 - 37268.665: 99.2152% ( 8) 00:10:09.228 37268.665 - 37479.222: 99.2723% ( 8) 00:10:09.228 37479.222 - 37689.780: 99.3222% ( 7) 00:10:09.228 37689.780 - 37900.337: 99.3650% ( 6) 00:10:09.228 37900.337 - 38110.895: 99.4221% ( 8) 00:10:09.228 38110.895 - 38321.452: 99.4720% ( 7) 00:10:09.228 38321.452 - 38532.010: 99.5220% ( 7) 00:10:09.228 38532.010 - 38742.567: 99.5434% ( 3) 00:10:09.228 42953.716 - 43164.273: 99.5576% ( 2) 00:10:09.228 43164.273 - 43374.831: 99.6005% ( 6) 00:10:09.228 43374.831 - 43585.388: 99.6433% ( 6) 00:10:09.228 43585.388 - 43795.945: 99.6932% ( 7) 00:10:09.228 43795.945 - 44006.503: 99.7432% ( 7) 00:10:09.228 44006.503 - 44217.060: 99.7931% ( 7) 00:10:09.228 44217.060 - 44427.618: 99.8430% ( 7) 00:10:09.228 44427.618 - 44638.175: 99.8930% ( 7) 00:10:09.228 44638.175 - 44848.733: 99.9429% ( 7) 00:10:09.228 44848.733 - 45059.290: 100.0000% ( 8) 00:10:09.228 00:10:09.228 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:09.228 ============================================================================== 00:10:09.228 Range in us Cumulative IO count 00:10:09.228 6737.838 - 6790.477: 0.0143% ( 2) 00:10:09.228 6790.477 - 6843.116: 0.0499% ( 5) 00:10:09.228 6843.116 - 6895.756: 0.0928% ( 6) 00:10:09.228 6895.756 - 6948.395: 0.1498% ( 8) 00:10:09.228 6948.395 - 7001.035: 0.2997% ( 21) 00:10:09.229 7001.035 - 7053.674: 0.5565% ( 36) 00:10:09.229 7053.674 - 7106.313: 0.9275% ( 52) 00:10:09.229 7106.313 - 7158.953: 1.4626% ( 75) 00:10:09.229 7158.953 - 7211.592: 2.1761% ( 100) 00:10:09.229 7211.592 - 7264.231: 3.0822% ( 127) 00:10:09.229 7264.231 - 7316.871: 4.3450% ( 177) 00:10:09.229 7316.871 - 7369.510: 5.5223% ( 165) 00:10:09.229 7369.510 - 7422.149: 6.9635% ( 202) 00:10:09.229 7422.149 - 7474.789: 8.3262% ( 191) 00:10:09.229 7474.789 - 7527.428: 9.6318% ( 183) 00:10:09.229 7527.428 - 7580.067: 11.0588% ( 200) 00:10:09.229 7580.067 - 7632.707: 12.3430% ( 180) 00:10:09.229 7632.707 - 7685.346: 13.5274% ( 166) 00:10:09.229 7685.346 - 7737.986: 14.5976% ( 150) 00:10:09.229 7737.986 - 7790.625: 15.6821% ( 152) 00:10:09.229 7790.625 - 7843.264: 16.6167% ( 131) 00:10:09.229 7843.264 - 7895.904: 17.7369% ( 157) 00:10:09.229 7895.904 - 7948.543: 18.9997% ( 177) 00:10:09.229 7948.543 - 8001.182: 20.2626% ( 177) 00:10:09.229 8001.182 - 8053.822: 21.6039% ( 188) 00:10:09.229 8053.822 - 8106.461: 22.8667% ( 177) 00:10:09.229 8106.461 - 8159.100: 24.2580% ( 195) 00:10:09.229 8159.100 - 8211.740: 25.6564% ( 196) 00:10:09.229 8211.740 - 8264.379: 27.3330% ( 235) 00:10:09.229 8264.379 - 8317.018: 29.4949% ( 303) 00:10:09.229 8317.018 - 8369.658: 31.7922% ( 322) 00:10:09.229 8369.658 - 8422.297: 34.5391% ( 385) 00:10:09.229 8422.297 - 8474.937: 37.7640% ( 452) 00:10:09.229 8474.937 - 8527.576: 41.0174% ( 456) 00:10:09.229 8527.576 - 8580.215: 44.5705% ( 498) 00:10:09.229 8580.215 - 8632.855: 48.5802% ( 562) 00:10:09.229 8632.855 - 8685.494: 52.4686% ( 545) 00:10:09.229 8685.494 - 8738.133: 56.2429% ( 529) 00:10:09.229 8738.133 - 8790.773: 59.9529% ( 520) 00:10:09.229 8790.773 - 8843.412: 63.3990% ( 483) 00:10:09.229 8843.412 - 8896.051: 66.7380% ( 468) 00:10:09.229 8896.051 - 8948.691: 69.9558% ( 451) 00:10:09.229 8948.691 - 9001.330: 73.1164% ( 443) 00:10:09.229 9001.330 - 9053.969: 76.0987% ( 418) 00:10:09.229 9053.969 - 9106.609: 78.8242% ( 382) 00:10:09.229 9106.609 - 9159.248: 81.1858% ( 331) 00:10:09.229 9159.248 - 9211.888: 83.1978% ( 282) 00:10:09.229 9211.888 - 9264.527: 84.7959% ( 224) 00:10:09.229 
9264.527 - 9317.166: 86.0659% ( 178) 00:10:09.229 9317.166 - 9369.806: 87.1361% ( 150) 00:10:09.229 9369.806 - 9422.445: 87.9495% ( 114) 00:10:09.229 9422.445 - 9475.084: 88.5631% ( 86) 00:10:09.229 9475.084 - 9527.724: 89.0340% ( 66) 00:10:09.229 9527.724 - 9580.363: 89.4121% ( 53) 00:10:09.229 9580.363 - 9633.002: 89.7332% ( 45) 00:10:09.229 9633.002 - 9685.642: 90.0400% ( 43) 00:10:09.229 9685.642 - 9738.281: 90.3039% ( 37) 00:10:09.229 9738.281 - 9790.920: 90.4680% ( 23) 00:10:09.229 9790.920 - 9843.560: 90.6892% ( 31) 00:10:09.229 9843.560 - 9896.199: 90.8818% ( 27) 00:10:09.229 9896.199 - 9948.839: 91.1173% ( 33) 00:10:09.229 9948.839 - 10001.478: 91.3313% ( 30) 00:10:09.229 10001.478 - 10054.117: 91.5382% ( 29) 00:10:09.229 10054.117 - 10106.757: 91.7523% ( 30) 00:10:09.229 10106.757 - 10159.396: 91.9449% ( 27) 00:10:09.229 10159.396 - 10212.035: 92.1233% ( 25) 00:10:09.229 10212.035 - 10264.675: 92.3017% ( 25) 00:10:09.229 10264.675 - 10317.314: 92.4800% ( 25) 00:10:09.229 10317.314 - 10369.953: 92.6441% ( 23) 00:10:09.229 10369.953 - 10422.593: 92.7725% ( 18) 00:10:09.229 10422.593 - 10475.232: 92.8796% ( 15) 00:10:09.229 10475.232 - 10527.871: 92.9723% ( 13) 00:10:09.229 10527.871 - 10580.511: 93.0722% ( 14) 00:10:09.229 10580.511 - 10633.150: 93.1650% ( 13) 00:10:09.229 10633.150 - 10685.790: 93.2506% ( 12) 00:10:09.229 10685.790 - 10738.429: 93.3433% ( 13) 00:10:09.229 10738.429 - 10791.068: 93.4147% ( 10) 00:10:09.229 10791.068 - 10843.708: 93.4789% ( 9) 00:10:09.229 10843.708 - 10896.347: 93.5502% ( 10) 00:10:09.229 10896.347 - 10948.986: 93.6002% ( 7) 00:10:09.229 10948.986 - 11001.626: 93.6501% ( 7) 00:10:09.229 11001.626 - 11054.265: 93.6929% ( 6) 00:10:09.229 11054.265 - 11106.904: 93.7429% ( 7) 00:10:09.229 11106.904 - 11159.544: 93.7999% ( 8) 00:10:09.229 11159.544 - 11212.183: 93.8499% ( 7) 00:10:09.229 11212.183 - 11264.822: 93.8927% ( 6) 00:10:09.229 11264.822 - 11317.462: 93.9426% ( 7) 00:10:09.229 11317.462 - 11370.101: 94.0140% ( 10) 00:10:09.229 11370.101 - 11422.741: 94.0782% ( 9) 00:10:09.229 11422.741 - 11475.380: 94.1281% ( 7) 00:10:09.229 11475.380 - 11528.019: 94.1995% ( 10) 00:10:09.229 11528.019 - 11580.659: 94.2566% ( 8) 00:10:09.229 11580.659 - 11633.298: 94.2994% ( 6) 00:10:09.229 11633.298 - 11685.937: 94.3350% ( 5) 00:10:09.229 11685.937 - 11738.577: 94.3779% ( 6) 00:10:09.229 11738.577 - 11791.216: 94.4135% ( 5) 00:10:09.229 11791.216 - 11843.855: 94.4492% ( 5) 00:10:09.229 11843.855 - 11896.495: 94.4706% ( 3) 00:10:09.229 11896.495 - 11949.134: 94.5063% ( 5) 00:10:09.229 11949.134 - 12001.773: 94.5420% ( 5) 00:10:09.229 12001.773 - 12054.413: 94.5776% ( 5) 00:10:09.229 12054.413 - 12107.052: 94.6062% ( 4) 00:10:09.229 12107.052 - 12159.692: 94.6347% ( 4) 00:10:09.229 12159.692 - 12212.331: 94.6918% ( 8) 00:10:09.229 12212.331 - 12264.970: 94.7417% ( 7) 00:10:09.229 12264.970 - 12317.610: 94.7845% ( 6) 00:10:09.229 12317.610 - 12370.249: 94.8345% ( 7) 00:10:09.229 12370.249 - 12422.888: 94.8987% ( 9) 00:10:09.229 12422.888 - 12475.528: 94.9700% ( 10) 00:10:09.229 12475.528 - 12528.167: 95.0342% ( 9) 00:10:09.229 12528.167 - 12580.806: 95.0913% ( 8) 00:10:09.229 12580.806 - 12633.446: 95.1341% ( 6) 00:10:09.229 12633.446 - 12686.085: 95.1841% ( 7) 00:10:09.229 12686.085 - 12738.724: 95.2269% ( 6) 00:10:09.229 12738.724 - 12791.364: 95.2768% ( 7) 00:10:09.229 12791.364 - 12844.003: 95.3339% ( 8) 00:10:09.229 12844.003 - 12896.643: 95.4195% ( 12) 00:10:09.229 12896.643 - 12949.282: 95.5123% ( 13) 00:10:09.229 12949.282 - 13001.921: 95.6122% ( 14) 
00:10:09.229 13001.921 - 13054.561: 95.7192% ( 15) 00:10:09.229 13054.561 - 13107.200: 95.8119% ( 13) 00:10:09.229 13107.200 - 13159.839: 95.8975% ( 12) 00:10:09.229 13159.839 - 13212.479: 95.9974% ( 14) 00:10:09.229 13212.479 - 13265.118: 96.0830% ( 12) 00:10:09.229 13265.118 - 13317.757: 96.1615% ( 11) 00:10:09.229 13317.757 - 13370.397: 96.2400% ( 11) 00:10:09.229 13370.397 - 13423.036: 96.3256% ( 12) 00:10:09.229 13423.036 - 13475.676: 96.4184% ( 13) 00:10:09.229 13475.676 - 13580.954: 96.5753% ( 22) 00:10:09.229 13580.954 - 13686.233: 96.7038% ( 18) 00:10:09.229 13686.233 - 13791.512: 96.8251% ( 17) 00:10:09.229 13791.512 - 13896.790: 96.9535% ( 18) 00:10:09.229 13896.790 - 14002.069: 97.0462% ( 13) 00:10:09.229 14002.069 - 14107.348: 97.1390% ( 13) 00:10:09.229 14107.348 - 14212.627: 97.2246% ( 12) 00:10:09.229 14212.627 - 14317.905: 97.3245% ( 14) 00:10:09.229 14317.905 - 14423.184: 97.3887% ( 9) 00:10:09.229 14423.184 - 14528.463: 97.4315% ( 6) 00:10:09.229 14528.463 - 14633.741: 97.4814% ( 7) 00:10:09.229 14633.741 - 14739.020: 97.5243% ( 6) 00:10:09.229 14739.020 - 14844.299: 97.5671% ( 6) 00:10:09.229 14844.299 - 14949.578: 97.6099% ( 6) 00:10:09.229 14949.578 - 15054.856: 97.6670% ( 8) 00:10:09.229 15054.856 - 15160.135: 97.7454% ( 11) 00:10:09.229 15160.135 - 15265.414: 97.8239% ( 11) 00:10:09.229 15265.414 - 15370.692: 97.9095% ( 12) 00:10:09.229 15370.692 - 15475.971: 97.9880% ( 11) 00:10:09.229 15475.971 - 15581.250: 98.0736% ( 12) 00:10:09.229 15581.250 - 15686.529: 98.1592% ( 12) 00:10:09.230 15686.529 - 15791.807: 98.2377% ( 11) 00:10:09.230 15791.807 - 15897.086: 98.3233% ( 12) 00:10:09.230 15897.086 - 16002.365: 98.3804% ( 8) 00:10:09.230 16002.365 - 16107.643: 98.4446% ( 9) 00:10:09.230 16107.643 - 16212.922: 98.5088% ( 9) 00:10:09.230 16212.922 - 16318.201: 98.5517% ( 6) 00:10:09.230 16318.201 - 16423.480: 98.5731% ( 3) 00:10:09.230 16423.480 - 16528.758: 98.5945% ( 3) 00:10:09.230 16528.758 - 16634.037: 98.6087% ( 2) 00:10:09.230 16634.037 - 16739.316: 98.6301% ( 3) 00:10:09.230 17160.431 - 17265.709: 98.6373% ( 1) 00:10:09.230 17265.709 - 17370.988: 98.6872% ( 7) 00:10:09.230 17370.988 - 17476.267: 98.7300% ( 6) 00:10:09.230 17476.267 - 17581.545: 98.7728% ( 6) 00:10:09.230 17581.545 - 17686.824: 98.8299% ( 8) 00:10:09.230 17686.824 - 17792.103: 98.8799% ( 7) 00:10:09.230 17792.103 - 17897.382: 98.9298% ( 7) 00:10:09.230 17897.382 - 18002.660: 98.9797% ( 7) 00:10:09.230 18002.660 - 18107.939: 99.0368% ( 8) 00:10:09.230 18107.939 - 18213.218: 99.0868% ( 7) 00:10:09.230 34320.861 - 34531.418: 99.1010% ( 2) 00:10:09.230 34531.418 - 34741.976: 99.1510% ( 7) 00:10:09.230 34741.976 - 34952.533: 99.2009% ( 7) 00:10:09.230 34952.533 - 35163.091: 99.2509% ( 7) 00:10:09.230 35163.091 - 35373.648: 99.3008% ( 7) 00:10:09.230 35373.648 - 35584.206: 99.3579% ( 8) 00:10:09.230 35584.206 - 35794.763: 99.4007% ( 6) 00:10:09.230 35794.763 - 36005.320: 99.4506% ( 7) 00:10:09.230 36005.320 - 36215.878: 99.5077% ( 8) 00:10:09.230 36215.878 - 36426.435: 99.5434% ( 5) 00:10:09.230 40848.141 - 41058.699: 99.5791% ( 5) 00:10:09.230 41058.699 - 41269.256: 99.6290% ( 7) 00:10:09.230 41269.256 - 41479.814: 99.6789% ( 7) 00:10:09.230 41479.814 - 41690.371: 99.7289% ( 7) 00:10:09.230 41690.371 - 41900.929: 99.7788% ( 7) 00:10:09.230 41900.929 - 42111.486: 99.8288% ( 7) 00:10:09.230 42111.486 - 42322.043: 99.8787% ( 7) 00:10:09.230 42322.043 - 42532.601: 99.9287% ( 7) 00:10:09.230 42532.601 - 42743.158: 99.9786% ( 7) 00:10:09.230 42743.158 - 42953.716: 100.0000% ( 3) 00:10:09.230 00:10:09.230 
Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:09.230 ============================================================================== 00:10:09.230 Range in us Cumulative IO count 00:10:09.230 6632.559 - 6658.879: 0.0071% ( 1) 00:10:09.230 6658.879 - 6685.198: 0.0213% ( 2) 00:10:09.230 6685.198 - 6711.518: 0.0355% ( 2) 00:10:09.230 6711.518 - 6737.838: 0.0497% ( 2) 00:10:09.230 6737.838 - 6790.477: 0.0994% ( 7) 00:10:09.230 6790.477 - 6843.116: 0.1562% ( 8) 00:10:09.230 6843.116 - 6895.756: 0.2486% ( 13) 00:10:09.230 6895.756 - 6948.395: 0.3267% ( 11) 00:10:09.230 6948.395 - 7001.035: 0.4261% ( 14) 00:10:09.230 7001.035 - 7053.674: 0.6108% ( 26) 00:10:09.230 7053.674 - 7106.313: 0.9233% ( 44) 00:10:09.230 7106.313 - 7158.953: 1.4062% ( 68) 00:10:09.230 7158.953 - 7211.592: 2.1875% ( 110) 00:10:09.230 7211.592 - 7264.231: 2.8977% ( 100) 00:10:09.230 7264.231 - 7316.871: 3.8849% ( 139) 00:10:09.230 7316.871 - 7369.510: 4.9006% ( 143) 00:10:09.230 7369.510 - 7422.149: 6.4418% ( 217) 00:10:09.230 7422.149 - 7474.789: 7.8480% ( 198) 00:10:09.230 7474.789 - 7527.428: 9.1903% ( 189) 00:10:09.230 7527.428 - 7580.067: 10.8807% ( 238) 00:10:09.230 7580.067 - 7632.707: 12.4290% ( 218) 00:10:09.230 7632.707 - 7685.346: 14.2472% ( 256) 00:10:09.230 7685.346 - 7737.986: 15.3409% ( 154) 00:10:09.230 7737.986 - 7790.625: 16.1932% ( 120) 00:10:09.230 7790.625 - 7843.264: 16.8466% ( 92) 00:10:09.230 7843.264 - 7895.904: 17.7983% ( 134) 00:10:09.230 7895.904 - 7948.543: 18.8565% ( 149) 00:10:09.230 7948.543 - 8001.182: 20.1136% ( 177) 00:10:09.230 8001.182 - 8053.822: 21.4347% ( 186) 00:10:09.230 8053.822 - 8106.461: 22.6634% ( 173) 00:10:09.230 8106.461 - 8159.100: 23.8920% ( 173) 00:10:09.230 8159.100 - 8211.740: 25.2060% ( 185) 00:10:09.230 8211.740 - 8264.379: 26.8537% ( 232) 00:10:09.230 8264.379 - 8317.018: 28.9915% ( 301) 00:10:09.230 8317.018 - 8369.658: 31.5909% ( 366) 00:10:09.230 8369.658 - 8422.297: 34.5597% ( 418) 00:10:09.230 8422.297 - 8474.937: 37.6918% ( 441) 00:10:09.230 8474.937 - 8527.576: 40.9233% ( 455) 00:10:09.230 8527.576 - 8580.215: 44.5170% ( 506) 00:10:09.230 8580.215 - 8632.855: 48.3736% ( 543) 00:10:09.230 8632.855 - 8685.494: 52.4432% ( 573) 00:10:09.230 8685.494 - 8738.133: 56.3068% ( 544) 00:10:09.230 8738.133 - 8790.773: 59.8935% ( 505) 00:10:09.230 8790.773 - 8843.412: 63.3665% ( 489) 00:10:09.230 8843.412 - 8896.051: 66.6406% ( 461) 00:10:09.230 8896.051 - 8948.691: 69.8366% ( 450) 00:10:09.230 8948.691 - 9001.330: 73.0185% ( 448) 00:10:09.230 9001.330 - 9053.969: 76.0298% ( 424) 00:10:09.230 9053.969 - 9106.609: 78.7784% ( 387) 00:10:09.230 9106.609 - 9159.248: 81.0582% ( 321) 00:10:09.230 9159.248 - 9211.888: 82.7912% ( 244) 00:10:09.230 9211.888 - 9264.527: 84.3040% ( 213) 00:10:09.230 9264.527 - 9317.166: 85.6605% ( 191) 00:10:09.230 9317.166 - 9369.806: 86.7969% ( 160) 00:10:09.230 9369.806 - 9422.445: 87.6847% ( 125) 00:10:09.230 9422.445 - 9475.084: 88.3381% ( 92) 00:10:09.230 9475.084 - 9527.724: 88.7713% ( 61) 00:10:09.230 9527.724 - 9580.363: 89.0909% ( 45) 00:10:09.230 9580.363 - 9633.002: 89.4034% ( 44) 00:10:09.230 9633.002 - 9685.642: 89.6378% ( 33) 00:10:09.230 9685.642 - 9738.281: 89.9148% ( 39) 00:10:09.230 9738.281 - 9790.920: 90.1207% ( 29) 00:10:09.230 9790.920 - 9843.560: 90.3409% ( 31) 00:10:09.230 9843.560 - 9896.199: 90.5682% ( 32) 00:10:09.230 9896.199 - 9948.839: 90.7955% ( 32) 00:10:09.230 9948.839 - 10001.478: 91.0298% ( 33) 00:10:09.230 10001.478 - 10054.117: 91.2500% ( 31) 00:10:09.230 10054.117 - 10106.757: 91.4560% ( 29) 
00:10:09.230 10106.757 - 10159.396: 91.6548% ( 28) 00:10:09.230 10159.396 - 10212.035: 91.8466% ( 27) 00:10:09.230 10212.035 - 10264.675: 92.0028% ( 22) 00:10:09.230 10264.675 - 10317.314: 92.1804% ( 25) 00:10:09.230 10317.314 - 10369.953: 92.3509% ( 24) 00:10:09.230 10369.953 - 10422.593: 92.5213% ( 24) 00:10:09.230 10422.593 - 10475.232: 92.6705% ( 21) 00:10:09.230 10475.232 - 10527.871: 92.8054% ( 19) 00:10:09.230 10527.871 - 10580.511: 92.9048% ( 14) 00:10:09.230 10580.511 - 10633.150: 93.0256% ( 17) 00:10:09.230 10633.150 - 10685.790: 93.1392% ( 16) 00:10:09.230 10685.790 - 10738.429: 93.2244% ( 12) 00:10:09.230 10738.429 - 10791.068: 93.3097% ( 12) 00:10:09.230 10791.068 - 10843.708: 93.4304% ( 17) 00:10:09.230 10843.708 - 10896.347: 93.5298% ( 14) 00:10:09.230 10896.347 - 10948.986: 93.6364% ( 15) 00:10:09.230 10948.986 - 11001.626: 93.7287% ( 13) 00:10:09.230 11001.626 - 11054.265: 93.8139% ( 12) 00:10:09.230 11054.265 - 11106.904: 93.9205% ( 15) 00:10:09.230 11106.904 - 11159.544: 93.9986% ( 11) 00:10:09.230 11159.544 - 11212.183: 94.0696% ( 10) 00:10:09.230 11212.183 - 11264.822: 94.1335% ( 9) 00:10:09.230 11264.822 - 11317.462: 94.1903% ( 8) 00:10:09.230 11317.462 - 11370.101: 94.2401% ( 7) 00:10:09.230 11370.101 - 11422.741: 94.2756% ( 5) 00:10:09.230 11422.741 - 11475.380: 94.2969% ( 3) 00:10:09.230 11475.380 - 11528.019: 94.3253% ( 4) 00:10:09.230 11528.019 - 11580.659: 94.3537% ( 4) 00:10:09.230 11580.659 - 11633.298: 94.3821% ( 4) 00:10:09.231 11633.298 - 11685.937: 94.4176% ( 5) 00:10:09.231 11685.937 - 11738.577: 94.4886% ( 10) 00:10:09.231 11738.577 - 11791.216: 94.5455% ( 8) 00:10:09.231 11791.216 - 11843.855: 94.6236% ( 11) 00:10:09.231 11843.855 - 11896.495: 94.6733% ( 7) 00:10:09.231 11896.495 - 11949.134: 94.7372% ( 9) 00:10:09.231 11949.134 - 12001.773: 94.8011% ( 9) 00:10:09.231 12001.773 - 12054.413: 94.8651% ( 9) 00:10:09.231 12054.413 - 12107.052: 94.9148% ( 7) 00:10:09.231 12107.052 - 12159.692: 94.9787% ( 9) 00:10:09.231 12159.692 - 12212.331: 95.0497% ( 10) 00:10:09.231 12212.331 - 12264.970: 95.1065% ( 8) 00:10:09.231 12264.970 - 12317.610: 95.1705% ( 9) 00:10:09.231 12317.610 - 12370.249: 95.2344% ( 9) 00:10:09.231 12370.249 - 12422.888: 95.2912% ( 8) 00:10:09.231 12422.888 - 12475.528: 95.3622% ( 10) 00:10:09.231 12475.528 - 12528.167: 95.4048% ( 6) 00:10:09.231 12528.167 - 12580.806: 95.4616% ( 8) 00:10:09.231 12580.806 - 12633.446: 95.5114% ( 7) 00:10:09.231 12633.446 - 12686.085: 95.5540% ( 6) 00:10:09.231 12686.085 - 12738.724: 95.5966% ( 6) 00:10:09.231 12738.724 - 12791.364: 95.6534% ( 8) 00:10:09.231 12791.364 - 12844.003: 95.6960% ( 6) 00:10:09.231 12844.003 - 12896.643: 95.7315% ( 5) 00:10:09.231 12896.643 - 12949.282: 95.7599% ( 4) 00:10:09.231 12949.282 - 13001.921: 95.7741% ( 2) 00:10:09.231 13001.921 - 13054.561: 95.7884% ( 2) 00:10:09.231 13054.561 - 13107.200: 95.7955% ( 1) 00:10:09.231 13107.200 - 13159.839: 95.8097% ( 2) 00:10:09.231 13159.839 - 13212.479: 95.8239% ( 2) 00:10:09.231 13212.479 - 13265.118: 95.8381% ( 2) 00:10:09.231 13265.118 - 13317.757: 95.8523% ( 2) 00:10:09.231 13317.757 - 13370.397: 95.8736% ( 3) 00:10:09.231 13370.397 - 13423.036: 95.9233% ( 7) 00:10:09.231 13423.036 - 13475.676: 95.9801% ( 8) 00:10:09.231 13475.676 - 13580.954: 96.0866% ( 15) 00:10:09.231 13580.954 - 13686.233: 96.1719% ( 12) 00:10:09.231 13686.233 - 13791.512: 96.2784% ( 15) 00:10:09.231 13791.512 - 13896.790: 96.4062% ( 18) 00:10:09.231 13896.790 - 14002.069: 96.5554% ( 21) 00:10:09.231 14002.069 - 14107.348: 96.6832% ( 18) 00:10:09.231 14107.348 
- 14212.627: 96.8040% ( 17) 00:10:09.231 14212.627 - 14317.905: 96.9389% ( 19) 00:10:09.231 14317.905 - 14423.184: 97.1094% ( 24) 00:10:09.231 14423.184 - 14528.463: 97.3011% ( 27) 00:10:09.231 14528.463 - 14633.741: 97.5000% ( 28) 00:10:09.231 14633.741 - 14739.020: 97.6918% ( 27) 00:10:09.231 14739.020 - 14844.299: 97.8338% ( 20) 00:10:09.231 14844.299 - 14949.578: 97.9616% ( 18) 00:10:09.231 14949.578 - 15054.856: 98.0611% ( 14) 00:10:09.231 15054.856 - 15160.135: 98.1392% ( 11) 00:10:09.231 15160.135 - 15265.414: 98.2173% ( 11) 00:10:09.231 15265.414 - 15370.692: 98.3026% ( 12) 00:10:09.231 15370.692 - 15475.971: 98.3878% ( 12) 00:10:09.231 15475.971 - 15581.250: 98.4588% ( 10) 00:10:09.231 15581.250 - 15686.529: 98.4943% ( 5) 00:10:09.231 15686.529 - 15791.807: 98.5227% ( 4) 00:10:09.231 15791.807 - 15897.086: 98.5440% ( 3) 00:10:09.231 15897.086 - 16002.365: 98.5724% ( 4) 00:10:09.231 16002.365 - 16107.643: 98.6009% ( 4) 00:10:09.231 16107.643 - 16212.922: 98.6222% ( 3) 00:10:09.231 16212.922 - 16318.201: 98.6364% ( 2) 00:10:09.231 17897.382 - 18002.660: 98.6719% ( 5) 00:10:09.231 18002.660 - 18107.939: 98.7145% ( 6) 00:10:09.231 18107.939 - 18213.218: 98.7571% ( 6) 00:10:09.231 18213.218 - 18318.496: 98.7997% ( 6) 00:10:09.231 18318.496 - 18423.775: 98.8423% ( 6) 00:10:09.231 18423.775 - 18529.054: 98.8920% ( 7) 00:10:09.231 18529.054 - 18634.333: 98.9347% ( 6) 00:10:09.231 18634.333 - 18739.611: 98.9773% ( 6) 00:10:09.231 18739.611 - 18844.890: 99.0199% ( 6) 00:10:09.231 18844.890 - 18950.169: 99.0625% ( 6) 00:10:09.231 18950.169 - 19055.447: 99.0909% ( 4) 00:10:09.231 27583.023 - 27793.581: 99.1335% ( 6) 00:10:09.231 27793.581 - 28004.138: 99.1903% ( 8) 00:10:09.231 28004.138 - 28214.696: 99.2401% ( 7) 00:10:09.231 28214.696 - 28425.253: 99.2898% ( 7) 00:10:09.231 28425.253 - 28635.810: 99.3466% ( 8) 00:10:09.231 28635.810 - 28846.368: 99.3892% ( 6) 00:10:09.231 28846.368 - 29056.925: 99.4176% ( 4) 00:10:09.231 29056.925 - 29267.483: 99.4744% ( 8) 00:10:09.231 29267.483 - 29478.040: 99.5312% ( 8) 00:10:09.231 29478.040 - 29688.598: 99.5455% ( 2) 00:10:09.231 33689.189 - 33899.746: 99.5739% ( 4) 00:10:09.231 33899.746 - 34110.304: 99.6236% ( 7) 00:10:09.231 34110.304 - 34320.861: 99.6804% ( 8) 00:10:09.231 34320.861 - 34531.418: 99.7372% ( 8) 00:10:09.231 34531.418 - 34741.976: 99.7798% ( 6) 00:10:09.231 34741.976 - 34952.533: 99.8366% ( 8) 00:10:09.231 34952.533 - 35163.091: 99.8793% ( 6) 00:10:09.231 35163.091 - 35373.648: 99.9361% ( 8) 00:10:09.231 35373.648 - 35584.206: 99.9858% ( 7) 00:10:09.231 35584.206 - 35794.763: 100.0000% ( 2) 00:10:09.231 00:10:09.231 08:30:44 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:10.614 Initializing NVMe Controllers 00:10:10.614 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:10.614 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:10.614 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:10.614 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:10.614 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:10.614 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:10.614 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:10.614 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:10.614 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:10.614 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:10.614 Initialization complete. Launching workers. 
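[Editor's note] This invocation runs spdk_nvme_perf with a write workload (`-w write`) at queue depth 128 (`-q 128`), 12288-byte (12 KiB) I/Os (`-o 12288`), for 1 second (`-t 1`), against shared-memory instance 0 (`-i 0`); the doubled `-L` enables software latency tracking at its most detailed level, which is what produces both the percentile summaries and the per-bucket histograms that follow. Because queue depth and I/O size are fixed, two sanity checks apply to the device table printed below: bandwidth should equal IOPS times the I/O size, and by Little's law IOPS times average latency should come back to roughly 128 outstanding I/Os per namespace. A small worked check, with values copied from the first row of the table below (this script is an illustration, not part of the test suite):

```python
IO_SIZE_BYTES = 12288   # -o 12288
QUEUE_DEPTH = 128       # -q 128

# Values from the "PCIE (0000:00:10.0) NSID 1" row of the table below.
iops = 12287.14
avg_latency_us = 10444.26

# Bandwidth check: IOPS * I/O size, converted to MiB/s.
mib_per_s = iops * IO_SIZE_BYTES / 2**20
print(f"{mib_per_s:.2f} MiB/s")  # ~143.99, matching the MiB/s column

# Little's law: in-flight I/Os = completion rate * mean latency.
in_flight = iops * avg_latency_us / 1e6
print(f"{in_flight:.1f} in flight vs -q {QUEUE_DEPTH}")  # ~128.3 vs 128
```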
00:10:10.614 ========================================================
00:10:10.614 Latency(us)
00:10:10.614 Device Information : IOPS MiB/s Average min max
00:10:10.614 PCIE (0000:00:10.0) NSID 1 from core 0: 12287.14 143.99 10444.26 8091.16 44180.16
00:10:10.614 PCIE (0000:00:11.0) NSID 1 from core 0: 12287.14 143.99 10428.42 8500.04 42117.86
00:10:10.614 PCIE (0000:00:13.0) NSID 1 from core 0: 12287.14 143.99 10411.23 8178.81 41466.13
00:10:10.614 PCIE (0000:00:12.0) NSID 1 from core 0: 12287.14 143.99 10394.42 8226.74 39327.38
00:10:10.614 PCIE (0000:00:12.0) NSID 2 from core 0: 12287.14 143.99 10378.66 8348.85 37837.20
00:10:10.614 PCIE (0000:00:12.0) NSID 3 from core 0: 12351.14 144.74 10308.33 8262.85 29439.38
00:10:10.614 ========================================================
00:10:10.614 Total : 73786.83 864.69 10394.15 8091.16 44180.16
00:10:10.614
00:10:10.614 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:10.614 =================================================================================
00:10:10.614 1.00000% : 8580.215us
00:10:10.614 10.00000% : 9001.330us
00:10:10.614 25.00000% : 9264.527us
00:10:10.614 50.00000% : 9527.724us
00:10:10.614 75.00000% : 9896.199us
00:10:10.614 90.00000% : 13423.036us
00:10:10.614 95.00000% : 15791.807us
00:10:10.614 98.00000% : 18423.775us
00:10:10.614 99.00000% : 33689.189us
00:10:10.614 99.50000% : 42532.601us
00:10:10.614 99.90000% : 44006.503us
00:10:10.614 99.99000% : 44217.060us
00:10:10.614 99.99900% : 44217.060us
00:10:10.614 99.99990% : 44217.060us
00:10:10.614 99.99999% : 44217.060us
00:10:10.614
00:10:10.614 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:10.614 =================================================================================
00:10:10.614 1.00000% : 8738.133us
00:10:10.614 10.00000% : 9001.330us
00:10:10.614 25.00000% : 9264.527us
00:10:10.614 50.00000% : 9527.724us
00:10:10.614 75.00000% : 9843.560us
00:10:10.614 90.00000% : 13475.676us
00:10:10.614 95.00000% : 15897.086us
00:10:10.614 98.00000% : 18107.939us
00:10:10.614 99.00000% : 32004.729us
00:10:10.614 99.50000% : 40637.584us
00:10:10.614 99.90000% : 41900.929us
00:10:10.614 99.99000% : 42111.486us
00:10:10.614 99.99900% : 42322.043us
00:10:10.614 99.99990% : 42322.043us
00:10:10.614 99.99999% : 42322.043us
00:10:10.614
00:10:10.614 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:10.614 =================================================================================
00:10:10.614 1.00000% : 8527.576us
00:10:10.614 10.00000% : 9001.330us
00:10:10.614 25.00000% : 9264.527us
00:10:10.614 50.00000% : 9527.724us
00:10:10.614 75.00000% : 9843.560us
00:10:10.614 90.00000% : 13159.839us
00:10:10.614 95.00000% : 15791.807us
00:10:10.614 98.00000% : 18739.611us
00:10:10.614 99.00000% : 31373.057us
00:10:10.614 99.50000% : 39795.354us
00:10:10.614 99.90000% : 41269.256us
00:10:10.614 99.99000% : 41479.814us
00:10:10.614 99.99900% : 41479.814us
00:10:10.614 99.99990% : 41479.814us
00:10:10.614 99.99999% : 41479.814us
00:10:10.614
00:10:10.614 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:10.614 =================================================================================
00:10:10.614 1.00000% : 8632.855us
00:10:10.614 10.00000% : 9053.969us
00:10:10.614 25.00000% : 9317.166us
00:10:10.614 50.00000% : 9527.724us
00:10:10.614 75.00000% : 9790.920us
00:10:10.614 90.00000% : 12949.282us
00:10:10.614 95.00000% : 15475.971us
00:10:10.614 98.00000% : 18634.333us
00:10:10.615
99.00000% : 29688.598us 00:10:10.615 99.50000% : 37689.780us 00:10:10.615 99.90000% : 39163.682us 00:10:10.615 99.99000% : 39374.239us 00:10:10.615 99.99900% : 39374.239us 00:10:10.615 99.99990% : 39374.239us 00:10:10.615 99.99999% : 39374.239us 00:10:10.615 00:10:10.615 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:10.615 ================================================================================= 00:10:10.615 1.00000% : 8685.494us 00:10:10.615 10.00000% : 9001.330us 00:10:10.615 25.00000% : 9264.527us 00:10:10.615 50.00000% : 9527.724us 00:10:10.615 75.00000% : 9790.920us 00:10:10.615 90.00000% : 13317.757us 00:10:10.615 95.00000% : 15686.529us 00:10:10.615 98.00000% : 18318.496us 00:10:10.615 99.00000% : 28425.253us 00:10:10.615 99.50000% : 36215.878us 00:10:10.615 99.90000% : 37689.780us 00:10:10.615 99.99000% : 37900.337us 00:10:10.615 99.99900% : 37900.337us 00:10:10.615 99.99990% : 37900.337us 00:10:10.615 99.99999% : 37900.337us 00:10:10.615 00:10:10.615 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:10.615 ================================================================================= 00:10:10.615 1.00000% : 8632.855us 00:10:10.615 10.00000% : 9001.330us 00:10:10.615 25.00000% : 9317.166us 00:10:10.615 50.00000% : 9527.724us 00:10:10.615 75.00000% : 9790.920us 00:10:10.615 90.00000% : 13317.757us 00:10:10.615 95.00000% : 15581.250us 00:10:10.615 98.00000% : 18213.218us 00:10:10.615 99.00000% : 19897.677us 00:10:10.615 99.50000% : 27793.581us 00:10:10.615 99.90000% : 29267.483us 00:10:10.615 99.99000% : 29478.040us 00:10:10.615 99.99900% : 29478.040us 00:10:10.615 99.99990% : 29478.040us 00:10:10.615 99.99999% : 29478.040us 00:10:10.615 00:10:10.615 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:10.615 ============================================================================== 00:10:10.615 Range in us Cumulative IO count 00:10:10.615 8053.822 - 8106.461: 0.0163% ( 2) 00:10:10.615 8106.461 - 8159.100: 0.0570% ( 5) 00:10:10.615 8159.100 - 8211.740: 0.1139% ( 7) 00:10:10.615 8211.740 - 8264.379: 0.1465% ( 4) 00:10:10.615 8264.379 - 8317.018: 0.2604% ( 14) 00:10:10.615 8317.018 - 8369.658: 0.3906% ( 16) 00:10:10.615 8369.658 - 8422.297: 0.4476% ( 7) 00:10:10.615 8422.297 - 8474.937: 0.5615% ( 14) 00:10:10.615 8474.937 - 8527.576: 0.7080% ( 18) 00:10:10.615 8527.576 - 8580.215: 1.1719% ( 57) 00:10:10.615 8580.215 - 8632.855: 1.7415% ( 70) 00:10:10.615 8632.855 - 8685.494: 2.3031% ( 69) 00:10:10.615 8685.494 - 8738.133: 3.1820% ( 108) 00:10:10.615 8738.133 - 8790.773: 4.2318% ( 129) 00:10:10.615 8790.773 - 8843.412: 5.7129% ( 182) 00:10:10.615 8843.412 - 8896.051: 7.8695% ( 265) 00:10:10.615 8896.051 - 8948.691: 9.8063% ( 238) 00:10:10.615 8948.691 - 9001.330: 12.1663% ( 290) 00:10:10.615 9001.330 - 9053.969: 14.3717% ( 271) 00:10:10.615 9053.969 - 9106.609: 16.5690% ( 270) 00:10:10.615 9106.609 - 9159.248: 19.5964% ( 372) 00:10:10.615 9159.248 - 9211.888: 23.5758% ( 489) 00:10:10.615 9211.888 - 9264.527: 28.2552% ( 575) 00:10:10.615 9264.527 - 9317.166: 33.8460% ( 687) 00:10:10.615 9317.166 - 9369.806: 39.2171% ( 660) 00:10:10.615 9369.806 - 9422.445: 43.9860% ( 586) 00:10:10.615 9422.445 - 9475.084: 48.7793% ( 589) 00:10:10.615 9475.084 - 9527.724: 53.7760% ( 614) 00:10:10.615 9527.724 - 9580.363: 58.8786% ( 627) 00:10:10.615 9580.363 - 9633.002: 63.1510% ( 525) 00:10:10.615 9633.002 - 9685.642: 66.7887% ( 447) 00:10:10.615 9685.642 - 9738.281: 69.8079% ( 371) 00:10:10.615 9738.281 - 9790.920: 72.4365% ( 
323) 00:10:10.615 9790.920 - 9843.560: 74.6908% ( 277) 00:10:10.615 9843.560 - 9896.199: 76.5462% ( 228) 00:10:10.615 9896.199 - 9948.839: 77.7181% ( 144) 00:10:10.615 9948.839 - 10001.478: 78.7028% ( 121) 00:10:10.615 10001.478 - 10054.117: 79.8340% ( 139) 00:10:10.615 10054.117 - 10106.757: 80.6559% ( 101) 00:10:10.615 10106.757 - 10159.396: 81.3558% ( 86) 00:10:10.615 10159.396 - 10212.035: 81.9010% ( 67) 00:10:10.615 10212.035 - 10264.675: 82.3568% ( 56) 00:10:10.615 10264.675 - 10317.314: 82.8451% ( 60) 00:10:10.615 10317.314 - 10369.953: 83.0892% ( 30) 00:10:10.615 10369.953 - 10422.593: 83.3577% ( 33) 00:10:10.615 10422.593 - 10475.232: 83.4635% ( 13) 00:10:10.615 10475.232 - 10527.871: 83.6100% ( 18) 00:10:10.615 10527.871 - 10580.511: 83.6833% ( 9) 00:10:10.615 10580.511 - 10633.150: 83.7728% ( 11) 00:10:10.615 10633.150 - 10685.790: 83.8623% ( 11) 00:10:10.615 10685.790 - 10738.429: 83.9600% ( 12) 00:10:10.615 10738.429 - 10791.068: 84.0088% ( 6) 00:10:10.615 10791.068 - 10843.708: 84.0332% ( 3) 00:10:10.615 10843.708 - 10896.347: 84.0820% ( 6) 00:10:10.615 10896.347 - 10948.986: 84.2204% ( 17) 00:10:10.615 10948.986 - 11001.626: 84.5215% ( 37) 00:10:10.615 11001.626 - 11054.265: 84.6924% ( 21) 00:10:10.615 11054.265 - 11106.904: 84.7412% ( 6) 00:10:10.615 11106.904 - 11159.544: 85.0179% ( 34) 00:10:10.615 11159.544 - 11212.183: 85.1562% ( 17) 00:10:10.615 11212.183 - 11264.822: 85.2539% ( 12) 00:10:10.615 11264.822 - 11317.462: 85.3760% ( 15) 00:10:10.615 11317.462 - 11370.101: 85.4574% ( 10) 00:10:10.615 11370.101 - 11422.741: 85.5713% ( 14) 00:10:10.615 11422.741 - 11475.380: 85.6852% ( 14) 00:10:10.615 11475.380 - 11528.019: 85.8154% ( 16) 00:10:10.615 11528.019 - 11580.659: 86.0107% ( 24) 00:10:10.615 11580.659 - 11633.298: 86.2956% ( 35) 00:10:10.615 11633.298 - 11685.937: 86.5397% ( 30) 00:10:10.615 11685.937 - 11738.577: 86.6862% ( 18) 00:10:10.615 11738.577 - 11791.216: 86.8490% ( 20) 00:10:10.615 11791.216 - 11843.855: 86.9222% ( 9) 00:10:10.615 11843.855 - 11896.495: 87.0117% ( 11) 00:10:10.615 11896.495 - 11949.134: 87.0768% ( 8) 00:10:10.615 11949.134 - 12001.773: 87.1338% ( 7) 00:10:10.615 12001.773 - 12054.413: 87.1826% ( 6) 00:10:10.615 12054.413 - 12107.052: 87.2152% ( 4) 00:10:10.615 12107.052 - 12159.692: 87.2477% ( 4) 00:10:10.615 12159.692 - 12212.331: 87.3128% ( 8) 00:10:10.615 12212.331 - 12264.970: 87.4023% ( 11) 00:10:10.615 12264.970 - 12317.610: 87.4919% ( 11) 00:10:10.615 12317.610 - 12370.249: 87.5895% ( 12) 00:10:10.615 12370.249 - 12422.888: 87.7523% ( 20) 00:10:10.615 12422.888 - 12475.528: 87.8825% ( 16) 00:10:10.615 12475.528 - 12528.167: 88.0452% ( 20) 00:10:10.615 12528.167 - 12580.806: 88.1917% ( 18) 00:10:10.615 12580.806 - 12633.446: 88.3057% ( 14) 00:10:10.615 12633.446 - 12686.085: 88.4277% ( 15) 00:10:10.615 12686.085 - 12738.724: 88.5498% ( 15) 00:10:10.615 12738.724 - 12791.364: 88.7370% ( 23) 00:10:10.615 12791.364 - 12844.003: 88.7939% ( 7) 00:10:10.615 12844.003 - 12896.643: 88.8672% ( 9) 00:10:10.615 12896.643 - 12949.282: 88.9974% ( 16) 00:10:10.615 12949.282 - 13001.921: 89.0951% ( 12) 00:10:10.615 13001.921 - 13054.561: 89.2008% ( 13) 00:10:10.615 13054.561 - 13107.200: 89.3066% ( 13) 00:10:10.615 13107.200 - 13159.839: 89.4368% ( 16) 00:10:10.615 13159.839 - 13212.479: 89.5508% ( 14) 00:10:10.615 13212.479 - 13265.118: 89.6322% ( 10) 00:10:10.615 13265.118 - 13317.757: 89.7868% ( 19) 00:10:10.615 13317.757 - 13370.397: 89.9251% ( 17) 00:10:10.615 13370.397 - 13423.036: 90.1449% ( 27) 00:10:10.615 13423.036 - 13475.676: 90.2995% 
( 19) 00:10:10.615 13475.676 - 13580.954: 90.6820% ( 47) 00:10:10.615 13580.954 - 13686.233: 90.9017% ( 27) 00:10:10.615 13686.233 - 13791.512: 91.1214% ( 27) 00:10:10.615 13791.512 - 13896.790: 91.2679% ( 18) 00:10:10.615 13896.790 - 14002.069: 91.4225% ( 19) 00:10:10.615 14002.069 - 14107.348: 91.6016% ( 22) 00:10:10.615 14107.348 - 14212.627: 91.7725% ( 21) 00:10:10.615 14212.627 - 14317.905: 91.9515% ( 22) 00:10:10.615 14317.905 - 14423.184: 92.0329% ( 10) 00:10:10.615 14423.184 - 14528.463: 92.1875% ( 19) 00:10:10.615 14528.463 - 14633.741: 92.3828% ( 24) 00:10:10.615 14633.741 - 14739.020: 92.7490% ( 45) 00:10:10.615 14739.020 - 14844.299: 93.1722% ( 52) 00:10:10.615 14844.299 - 14949.578: 93.6523% ( 59) 00:10:10.615 14949.578 - 15054.856: 93.9290% ( 34) 00:10:10.615 15054.856 - 15160.135: 94.1488% ( 27) 00:10:10.615 15160.135 - 15265.414: 94.3359% ( 23) 00:10:10.615 15265.414 - 15370.692: 94.4906% ( 19) 00:10:10.615 15370.692 - 15475.971: 94.6859% ( 24) 00:10:10.615 15475.971 - 15581.250: 94.8161% ( 16) 00:10:10.615 15581.250 - 15686.529: 94.9707% ( 19) 00:10:10.615 15686.529 - 15791.807: 95.1172% ( 18) 00:10:10.615 15791.807 - 15897.086: 95.2393% ( 15) 00:10:10.615 15897.086 - 16002.365: 95.2962% ( 7) 00:10:10.615 16002.365 - 16107.643: 95.4346% ( 17) 00:10:10.615 16107.643 - 16212.922: 95.6706% ( 29) 00:10:10.615 16212.922 - 16318.201: 95.8740% ( 25) 00:10:10.615 16318.201 - 16423.480: 96.0205% ( 18) 00:10:10.615 16423.480 - 16528.758: 96.1100% ( 11) 00:10:10.615 16528.758 - 16634.037: 96.2891% ( 22) 00:10:10.615 16634.037 - 16739.316: 96.4111% ( 15) 00:10:10.615 16739.316 - 16844.594: 96.6146% ( 25) 00:10:10.615 16844.594 - 16949.873: 96.7529% ( 17) 00:10:10.615 16949.873 - 17055.152: 96.9076% ( 19) 00:10:10.615 17055.152 - 17160.431: 96.9808% ( 9) 00:10:10.615 17160.431 - 17265.709: 97.0866% ( 13) 00:10:10.615 17265.709 - 17370.988: 97.2087% ( 15) 00:10:10.615 17370.988 - 17476.267: 97.3877% ( 22) 00:10:10.615 17476.267 - 17581.545: 97.5423% ( 19) 00:10:10.615 17581.545 - 17686.824: 97.6156% ( 9) 00:10:10.615 17686.824 - 17792.103: 97.7214% ( 13) 00:10:10.615 17792.103 - 17897.382: 97.7946% ( 9) 00:10:10.615 17897.382 - 18002.660: 97.8841% ( 11) 00:10:10.615 18002.660 - 18107.939: 97.9329% ( 6) 00:10:10.615 18107.939 - 18213.218: 97.9574% ( 3) 00:10:10.615 18213.218 - 18318.496: 97.9980% ( 5) 00:10:10.615 18318.496 - 18423.775: 98.0225% ( 3) 00:10:10.615 18423.775 - 18529.054: 98.0632% ( 5) 00:10:10.616 18529.054 - 18634.333: 98.1201% ( 7) 00:10:10.616 18634.333 - 18739.611: 98.1771% ( 7) 00:10:10.616 18739.611 - 18844.890: 98.2096% ( 4) 00:10:10.616 18844.890 - 18950.169: 98.2422% ( 4) 00:10:10.616 18950.169 - 19055.447: 98.2747% ( 4) 00:10:10.616 19055.447 - 19160.726: 98.3073% ( 4) 00:10:10.616 19160.726 - 19266.005: 98.3724% ( 8) 00:10:10.616 19266.005 - 19371.284: 98.4375% ( 8) 00:10:10.616 19371.284 - 19476.562: 98.5189% ( 10) 00:10:10.616 19476.562 - 19581.841: 98.5840% ( 8) 00:10:10.616 19581.841 - 19687.120: 98.6328% ( 6) 00:10:10.616 20002.956 - 20108.235: 98.6654% ( 4) 00:10:10.616 20108.235 - 20213.513: 98.7061% ( 5) 00:10:10.616 20213.513 - 20318.792: 98.7549% ( 6) 00:10:10.616 20318.792 - 20424.071: 98.7956% ( 5) 00:10:10.616 20424.071 - 20529.349: 98.8281% ( 4) 00:10:10.616 20529.349 - 20634.628: 98.8688% ( 5) 00:10:10.616 20634.628 - 20739.907: 98.9176% ( 6) 00:10:10.616 20739.907 - 20845.186: 98.9502% ( 4) 00:10:10.616 20845.186 - 20950.464: 98.9583% ( 1) 00:10:10.616 33268.074 - 33478.631: 98.9746% ( 2) 00:10:10.616 33478.631 - 33689.189: 99.0479% ( 9) 
00:10:10.616 33689.189 - 33899.746: 99.1455% ( 12) 00:10:10.616 33899.746 - 34110.304: 99.1862% ( 5) 00:10:10.616 34110.304 - 34320.861: 99.2513% ( 8) 00:10:10.616 34320.861 - 34531.418: 99.2920% ( 5) 00:10:10.616 34531.418 - 34741.976: 99.3408% ( 6) 00:10:10.616 34741.976 - 34952.533: 99.3978% ( 7) 00:10:10.616 34952.533 - 35163.091: 99.4466% ( 6) 00:10:10.616 35163.091 - 35373.648: 99.4792% ( 4) 00:10:10.616 42111.486 - 42322.043: 99.4954% ( 2) 00:10:10.616 42322.043 - 42532.601: 99.5605% ( 8) 00:10:10.616 42532.601 - 42743.158: 99.6012% ( 5) 00:10:10.616 42743.158 - 42953.716: 99.6663% ( 8) 00:10:10.616 42953.716 - 43164.273: 99.7314% ( 8) 00:10:10.616 43164.273 - 43374.831: 99.7884% ( 7) 00:10:10.616 43374.831 - 43585.388: 99.8454% ( 7) 00:10:10.616 43585.388 - 43795.945: 99.8942% ( 6) 00:10:10.616 43795.945 - 44006.503: 99.9593% ( 8) 00:10:10.616 44006.503 - 44217.060: 100.0000% ( 5) 00:10:10.616 00:10:10.616 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:10.616 ============================================================================== 00:10:10.616 Range in us Cumulative IO count 00:10:10.616 8474.937 - 8527.576: 0.0326% ( 4) 00:10:10.616 8527.576 - 8580.215: 0.1221% ( 11) 00:10:10.616 8580.215 - 8632.855: 0.3337% ( 26) 00:10:10.616 8632.855 - 8685.494: 0.7161% ( 47) 00:10:10.616 8685.494 - 8738.133: 1.3753% ( 81) 00:10:10.616 8738.133 - 8790.773: 2.8971% ( 187) 00:10:10.616 8790.773 - 8843.412: 4.3701% ( 181) 00:10:10.616 8843.412 - 8896.051: 6.1768% ( 222) 00:10:10.616 8896.051 - 8948.691: 8.5205% ( 288) 00:10:10.616 8948.691 - 9001.330: 11.1816% ( 327) 00:10:10.616 9001.330 - 9053.969: 13.7451% ( 315) 00:10:10.616 9053.969 - 9106.609: 16.1214% ( 292) 00:10:10.616 9106.609 - 9159.248: 19.3685% ( 399) 00:10:10.616 9159.248 - 9211.888: 22.6644% ( 405) 00:10:10.616 9211.888 - 9264.527: 26.9613% ( 528) 00:10:10.616 9264.527 - 9317.166: 32.2835% ( 654) 00:10:10.616 9317.166 - 9369.806: 38.0615% ( 710) 00:10:10.616 9369.806 - 9422.445: 43.5954% ( 680) 00:10:10.616 9422.445 - 9475.084: 49.5605% ( 733) 00:10:10.616 9475.084 - 9527.724: 55.2409% ( 698) 00:10:10.616 9527.724 - 9580.363: 60.1318% ( 601) 00:10:10.616 9580.363 - 9633.002: 64.5020% ( 537) 00:10:10.616 9633.002 - 9685.642: 68.3675% ( 475) 00:10:10.616 9685.642 - 9738.281: 71.3786% ( 370) 00:10:10.616 9738.281 - 9790.920: 73.8281% ( 301) 00:10:10.616 9790.920 - 9843.560: 75.7080% ( 231) 00:10:10.616 9843.560 - 9896.199: 77.3844% ( 206) 00:10:10.616 9896.199 - 9948.839: 78.6784% ( 159) 00:10:10.616 9948.839 - 10001.478: 79.6305% ( 117) 00:10:10.616 10001.478 - 10054.117: 80.4199% ( 97) 00:10:10.616 10054.117 - 10106.757: 81.0140% ( 73) 00:10:10.616 10106.757 - 10159.396: 81.6162% ( 74) 00:10:10.616 10159.396 - 10212.035: 82.1777% ( 69) 00:10:10.616 10212.035 - 10264.675: 82.6497% ( 58) 00:10:10.616 10264.675 - 10317.314: 83.1380% ( 60) 00:10:10.616 10317.314 - 10369.953: 83.4635% ( 40) 00:10:10.616 10369.953 - 10422.593: 83.8949% ( 53) 00:10:10.616 10422.593 - 10475.232: 84.0902% ( 24) 00:10:10.616 10475.232 - 10527.871: 84.1634% ( 9) 00:10:10.616 10527.871 - 10580.511: 84.2204% ( 7) 00:10:10.616 10580.511 - 10633.150: 84.2529% ( 4) 00:10:10.616 10633.150 - 10685.790: 84.2855% ( 4) 00:10:10.616 10685.790 - 10738.429: 84.3099% ( 3) 00:10:10.616 10738.429 - 10791.068: 84.3587% ( 6) 00:10:10.616 10791.068 - 10843.708: 84.4238% ( 8) 00:10:10.616 10843.708 - 10896.347: 84.5133% ( 11) 00:10:10.616 10896.347 - 10948.986: 84.6924% ( 22) 00:10:10.616 10948.986 - 11001.626: 84.9528% ( 32) 00:10:10.616 11001.626 - 
11054.265: 85.1725% ( 27) 00:10:10.616 11054.265 - 11106.904: 85.4818% ( 38) 00:10:10.616 11106.904 - 11159.544: 85.6852% ( 25) 00:10:10.616 11159.544 - 11212.183: 85.8805% ( 24) 00:10:10.616 11212.183 - 11264.822: 86.0026% ( 15) 00:10:10.616 11264.822 - 11317.462: 86.1003% ( 12) 00:10:10.616 11317.462 - 11370.101: 86.2142% ( 14) 00:10:10.616 11370.101 - 11422.741: 86.3444% ( 16) 00:10:10.616 11422.741 - 11475.380: 86.4583% ( 14) 00:10:10.616 11475.380 - 11528.019: 86.5804% ( 15) 00:10:10.616 11528.019 - 11580.659: 86.6781% ( 12) 00:10:10.616 11580.659 - 11633.298: 86.7839% ( 13) 00:10:10.616 11633.298 - 11685.937: 86.8896% ( 13) 00:10:10.616 11685.937 - 11738.577: 86.9629% ( 9) 00:10:10.616 11738.577 - 11791.216: 87.0280% ( 8) 00:10:10.616 11791.216 - 11843.855: 87.1094% ( 10) 00:10:10.616 11843.855 - 11896.495: 87.1908% ( 10) 00:10:10.616 11896.495 - 11949.134: 87.2640% ( 9) 00:10:10.616 11949.134 - 12001.773: 87.2965% ( 4) 00:10:10.616 12001.773 - 12054.413: 87.4186% ( 15) 00:10:10.616 12054.413 - 12107.052: 87.4919% ( 9) 00:10:10.616 12107.052 - 12159.692: 87.5814% ( 11) 00:10:10.616 12159.692 - 12212.331: 87.6872% ( 13) 00:10:10.616 12212.331 - 12264.970: 87.7930% ( 13) 00:10:10.616 12264.970 - 12317.610: 87.8499% ( 7) 00:10:10.616 12317.610 - 12370.249: 87.8825% ( 4) 00:10:10.616 12370.249 - 12422.888: 87.9069% ( 3) 00:10:10.616 12422.888 - 12475.528: 87.9883% ( 10) 00:10:10.616 12475.528 - 12528.167: 88.0371% ( 6) 00:10:10.616 12528.167 - 12580.806: 88.0778% ( 5) 00:10:10.616 12580.806 - 12633.446: 88.1185% ( 5) 00:10:10.616 12633.446 - 12686.085: 88.2243% ( 13) 00:10:10.616 12686.085 - 12738.724: 88.3789% ( 19) 00:10:10.616 12738.724 - 12791.364: 88.5010% ( 15) 00:10:10.616 12791.364 - 12844.003: 88.6475% ( 18) 00:10:10.616 12844.003 - 12896.643: 88.7451% ( 12) 00:10:10.616 12896.643 - 12949.282: 88.7939% ( 6) 00:10:10.616 12949.282 - 13001.921: 88.8835% ( 11) 00:10:10.616 13001.921 - 13054.561: 88.9567% ( 9) 00:10:10.616 13054.561 - 13107.200: 89.0381% ( 10) 00:10:10.616 13107.200 - 13159.839: 89.1520% ( 14) 00:10:10.616 13159.839 - 13212.479: 89.3066% ( 19) 00:10:10.616 13212.479 - 13265.118: 89.4857% ( 22) 00:10:10.616 13265.118 - 13317.757: 89.6403% ( 19) 00:10:10.616 13317.757 - 13370.397: 89.7949% ( 19) 00:10:10.616 13370.397 - 13423.036: 89.9170% ( 15) 00:10:10.616 13423.036 - 13475.676: 90.0472% ( 16) 00:10:10.616 13475.676 - 13580.954: 90.2995% ( 31) 00:10:10.616 13580.954 - 13686.233: 90.6901% ( 48) 00:10:10.616 13686.233 - 13791.512: 91.1133% ( 52) 00:10:10.616 13791.512 - 13896.790: 91.3737% ( 32) 00:10:10.616 13896.790 - 14002.069: 91.5283% ( 19) 00:10:10.616 14002.069 - 14107.348: 91.6260% ( 12) 00:10:10.616 14107.348 - 14212.627: 91.7480% ( 15) 00:10:10.616 14212.627 - 14317.905: 91.8783% ( 16) 00:10:10.616 14317.905 - 14423.184: 92.0166% ( 17) 00:10:10.616 14423.184 - 14528.463: 92.2445% ( 28) 00:10:10.616 14528.463 - 14633.741: 92.5049% ( 32) 00:10:10.616 14633.741 - 14739.020: 92.8548% ( 43) 00:10:10.616 14739.020 - 14844.299: 92.9525% ( 12) 00:10:10.616 14844.299 - 14949.578: 93.0257% ( 9) 00:10:10.616 14949.578 - 15054.856: 93.0664% ( 5) 00:10:10.616 15054.856 - 15160.135: 93.1803% ( 14) 00:10:10.616 15160.135 - 15265.414: 93.4652% ( 35) 00:10:10.616 15265.414 - 15370.692: 93.7337% ( 33) 00:10:10.616 15370.692 - 15475.971: 93.8639% ( 16) 00:10:10.616 15475.971 - 15581.250: 94.1732% ( 38) 00:10:10.616 15581.250 - 15686.529: 94.3604% ( 23) 00:10:10.616 15686.529 - 15791.807: 94.6615% ( 37) 00:10:10.616 15791.807 - 15897.086: 95.0277% ( 45) 00:10:10.616 15897.086 - 
16002.365: 95.3125% ( 35) 00:10:10.616 16002.365 - 16107.643: 95.5648% ( 31) 00:10:10.616 16107.643 - 16212.922: 95.7113% ( 18) 00:10:10.616 16212.922 - 16318.201: 95.8740% ( 20) 00:10:10.616 16318.201 - 16423.480: 96.0042% ( 16) 00:10:10.616 16423.480 - 16528.758: 96.1670% ( 20) 00:10:10.616 16528.758 - 16634.037: 96.2728% ( 13) 00:10:10.616 16634.037 - 16739.316: 96.5088% ( 29) 00:10:10.616 16739.316 - 16844.594: 96.7367% ( 28) 00:10:10.616 16844.594 - 16949.873: 96.9645% ( 28) 00:10:10.616 16949.873 - 17055.152: 97.1110% ( 18) 00:10:10.616 17055.152 - 17160.431: 97.2738% ( 20) 00:10:10.616 17160.431 - 17265.709: 97.3389% ( 8) 00:10:10.616 17265.709 - 17370.988: 97.3877% ( 6) 00:10:10.616 17370.988 - 17476.267: 97.4040% ( 2) 00:10:10.616 17581.545 - 17686.824: 97.4202% ( 2) 00:10:10.616 17686.824 - 17792.103: 97.4691% ( 6) 00:10:10.616 17792.103 - 17897.382: 97.7051% ( 29) 00:10:10.616 17897.382 - 18002.660: 97.9248% ( 27) 00:10:10.616 18002.660 - 18107.939: 98.0469% ( 15) 00:10:10.616 18107.939 - 18213.218: 98.0957% ( 6) 00:10:10.616 18213.218 - 18318.496: 98.1283% ( 4) 00:10:10.616 18318.496 - 18423.775: 98.1445% ( 2) 00:10:10.616 18423.775 - 18529.054: 98.1608% ( 2) 00:10:10.616 18529.054 - 18634.333: 98.1771% ( 2) 00:10:10.616 18634.333 - 18739.611: 98.2015% ( 3) 00:10:10.616 18739.611 - 18844.890: 98.2259% ( 3) 00:10:10.617 18844.890 - 18950.169: 98.2503% ( 3) 00:10:10.617 18950.169 - 19055.447: 98.2747% ( 3) 00:10:10.617 19055.447 - 19160.726: 98.2992% ( 3) 00:10:10.617 19160.726 - 19266.005: 98.3236% ( 3) 00:10:10.617 19266.005 - 19371.284: 98.3480% ( 3) 00:10:10.617 19371.284 - 19476.562: 98.3805% ( 4) 00:10:10.617 19476.562 - 19581.841: 98.3968% ( 2) 00:10:10.617 19581.841 - 19687.120: 98.4131% ( 2) 00:10:10.617 19687.120 - 19792.398: 98.5107% ( 12) 00:10:10.617 19792.398 - 19897.677: 98.6003% ( 11) 00:10:10.617 19897.677 - 20002.956: 98.6979% ( 12) 00:10:10.617 20002.956 - 20108.235: 98.7223% ( 3) 00:10:10.617 20108.235 - 20213.513: 98.7549% ( 4) 00:10:10.617 20213.513 - 20318.792: 98.8037% ( 6) 00:10:10.617 20318.792 - 20424.071: 98.8525% ( 6) 00:10:10.617 20424.071 - 20529.349: 98.9095% ( 7) 00:10:10.617 20529.349 - 20634.628: 98.9583% ( 6) 00:10:10.617 31794.172 - 32004.729: 99.0234% ( 8) 00:10:10.617 32004.729 - 32215.287: 99.0804% ( 7) 00:10:10.617 32215.287 - 32425.844: 99.1455% ( 8) 00:10:10.617 32425.844 - 32636.402: 99.2025% ( 7) 00:10:10.617 32636.402 - 32846.959: 99.2676% ( 8) 00:10:10.617 32846.959 - 33057.516: 99.3245% ( 7) 00:10:10.617 33057.516 - 33268.074: 99.3815% ( 7) 00:10:10.617 33268.074 - 33478.631: 99.4385% ( 7) 00:10:10.617 33478.631 - 33689.189: 99.4792% ( 5) 00:10:10.617 40427.027 - 40637.584: 99.5280% ( 6) 00:10:10.617 40637.584 - 40848.141: 99.5931% ( 8) 00:10:10.617 40848.141 - 41058.699: 99.6582% ( 8) 00:10:10.617 41058.699 - 41269.256: 99.7233% ( 8) 00:10:10.617 41269.256 - 41479.814: 99.7884% ( 8) 00:10:10.617 41479.814 - 41690.371: 99.8617% ( 9) 00:10:10.617 41690.371 - 41900.929: 99.9268% ( 8) 00:10:10.617 41900.929 - 42111.486: 99.9919% ( 8) 00:10:10.617 42111.486 - 42322.043: 100.0000% ( 1) 00:10:10.617 00:10:10.617 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:10.617 ============================================================================== 00:10:10.617 Range in us Cumulative IO count 00:10:10.617 8159.100 - 8211.740: 0.0244% ( 3) 00:10:10.617 8211.740 - 8264.379: 0.0570% ( 4) 00:10:10.617 8264.379 - 8317.018: 0.1546% ( 12) 00:10:10.617 8317.018 - 8369.658: 0.2848% ( 16) 00:10:10.617 8369.658 - 8422.297: 0.5290% ( 
30) 00:10:10.617 8422.297 - 8474.937: 0.7894% ( 32) 00:10:10.617 8474.937 - 8527.576: 1.1068% ( 39) 00:10:10.617 8527.576 - 8580.215: 1.4160% ( 38) 00:10:10.617 8580.215 - 8632.855: 1.8311% ( 51) 00:10:10.617 8632.855 - 8685.494: 2.2786% ( 55) 00:10:10.617 8685.494 - 8738.133: 2.9867% ( 87) 00:10:10.617 8738.133 - 8790.773: 3.9144% ( 114) 00:10:10.617 8790.773 - 8843.412: 5.2816% ( 168) 00:10:10.617 8843.412 - 8896.051: 7.2673% ( 244) 00:10:10.617 8896.051 - 8948.691: 9.5052% ( 275) 00:10:10.617 8948.691 - 9001.330: 11.7106% ( 271) 00:10:10.617 9001.330 - 9053.969: 14.0951% ( 293) 00:10:10.617 9053.969 - 9106.609: 16.7887% ( 331) 00:10:10.617 9106.609 - 9159.248: 20.1904% ( 418) 00:10:10.617 9159.248 - 9211.888: 23.5677% ( 415) 00:10:10.617 9211.888 - 9264.527: 27.6449% ( 501) 00:10:10.617 9264.527 - 9317.166: 31.8685% ( 519) 00:10:10.617 9317.166 - 9369.806: 37.0117% ( 632) 00:10:10.617 9369.806 - 9422.445: 42.1712% ( 634) 00:10:10.617 9422.445 - 9475.084: 47.6888% ( 678) 00:10:10.617 9475.084 - 9527.724: 53.2227% ( 680) 00:10:10.617 9527.724 - 9580.363: 58.1787% ( 609) 00:10:10.617 9580.363 - 9633.002: 62.7686% ( 564) 00:10:10.617 9633.002 - 9685.642: 66.6992% ( 483) 00:10:10.617 9685.642 - 9738.281: 70.0195% ( 408) 00:10:10.617 9738.281 - 9790.920: 72.8923% ( 353) 00:10:10.617 9790.920 - 9843.560: 75.3662% ( 304) 00:10:10.617 9843.560 - 9896.199: 76.9368% ( 193) 00:10:10.617 9896.199 - 9948.839: 78.1576% ( 150) 00:10:10.617 9948.839 - 10001.478: 78.8818% ( 89) 00:10:10.617 10001.478 - 10054.117: 79.4515% ( 70) 00:10:10.617 10054.117 - 10106.757: 79.9235% ( 58) 00:10:10.617 10106.757 - 10159.396: 80.3548% ( 53) 00:10:10.617 10159.396 - 10212.035: 80.7210% ( 45) 00:10:10.617 10212.035 - 10264.675: 81.2174% ( 61) 00:10:10.617 10264.675 - 10317.314: 81.5023% ( 35) 00:10:10.617 10317.314 - 10369.953: 81.8441% ( 42) 00:10:10.617 10369.953 - 10422.593: 82.2998% ( 56) 00:10:10.617 10422.593 - 10475.232: 82.6497% ( 43) 00:10:10.617 10475.232 - 10527.871: 82.9102% ( 32) 00:10:10.617 10527.871 - 10580.511: 83.1624% ( 31) 00:10:10.617 10580.511 - 10633.150: 83.3984% ( 29) 00:10:10.617 10633.150 - 10685.790: 83.5938% ( 24) 00:10:10.617 10685.790 - 10738.429: 83.7077% ( 14) 00:10:10.617 10738.429 - 10791.068: 83.8053% ( 12) 00:10:10.617 10791.068 - 10843.708: 83.9030% ( 12) 00:10:10.617 10843.708 - 10896.347: 83.9681% ( 8) 00:10:10.617 10896.347 - 10948.986: 84.1146% ( 18) 00:10:10.617 10948.986 - 11001.626: 84.3099% ( 24) 00:10:10.617 11001.626 - 11054.265: 84.4401% ( 16) 00:10:10.617 11054.265 - 11106.904: 84.5459% ( 13) 00:10:10.617 11106.904 - 11159.544: 84.7249% ( 22) 00:10:10.617 11159.544 - 11212.183: 85.0016% ( 34) 00:10:10.617 11212.183 - 11264.822: 85.2376% ( 29) 00:10:10.617 11264.822 - 11317.462: 85.4980% ( 32) 00:10:10.617 11317.462 - 11370.101: 85.6608% ( 20) 00:10:10.617 11370.101 - 11422.741: 85.8887% ( 28) 00:10:10.617 11422.741 - 11475.380: 86.1816% ( 36) 00:10:10.617 11475.380 - 11528.019: 86.3932% ( 26) 00:10:10.617 11528.019 - 11580.659: 86.6048% ( 26) 00:10:10.617 11580.659 - 11633.298: 86.7106% ( 13) 00:10:10.617 11633.298 - 11685.937: 86.8245% ( 14) 00:10:10.617 11685.937 - 11738.577: 86.8978% ( 9) 00:10:10.617 11738.577 - 11791.216: 86.9792% ( 10) 00:10:10.617 11791.216 - 11843.855: 87.1012% ( 15) 00:10:10.617 11843.855 - 11896.495: 87.2233% ( 15) 00:10:10.617 11896.495 - 11949.134: 87.3128% ( 11) 00:10:10.617 11949.134 - 12001.773: 87.3942% ( 10) 00:10:10.617 12001.773 - 12054.413: 87.4674% ( 9) 00:10:10.617 12054.413 - 12107.052: 87.5570% ( 11) 00:10:10.617 12107.052 - 
12159.692: 87.6872% ( 16) 00:10:10.617 12159.692 - 12212.331: 87.8743% ( 23) 00:10:10.617 12212.331 - 12264.970: 88.1185% ( 30) 00:10:10.617 12264.970 - 12317.610: 88.3382% ( 27) 00:10:10.617 12317.610 - 12370.249: 88.5824% ( 30) 00:10:10.617 12370.249 - 12422.888: 88.8021% ( 27) 00:10:10.617 12422.888 - 12475.528: 88.8753% ( 9) 00:10:10.617 12475.528 - 12528.167: 88.9404% ( 8) 00:10:10.617 12528.167 - 12580.806: 89.0055% ( 8) 00:10:10.617 12580.806 - 12633.446: 89.0788% ( 9) 00:10:10.617 12633.446 - 12686.085: 89.1846% ( 13) 00:10:10.617 12686.085 - 12738.724: 89.2985% ( 14) 00:10:10.617 12738.724 - 12791.364: 89.4043% ( 13) 00:10:10.617 12791.364 - 12844.003: 89.4938% ( 11) 00:10:10.617 12844.003 - 12896.643: 89.5996% ( 13) 00:10:10.617 12896.643 - 12949.282: 89.6973% ( 12) 00:10:10.617 12949.282 - 13001.921: 89.7786% ( 10) 00:10:10.617 13001.921 - 13054.561: 89.8763% ( 12) 00:10:10.617 13054.561 - 13107.200: 89.9740% ( 12) 00:10:10.617 13107.200 - 13159.839: 90.0960% ( 15) 00:10:10.617 13159.839 - 13212.479: 90.2344% ( 17) 00:10:10.617 13212.479 - 13265.118: 90.3564% ( 15) 00:10:10.617 13265.118 - 13317.757: 90.4948% ( 17) 00:10:10.617 13317.757 - 13370.397: 90.6820% ( 23) 00:10:10.617 13370.397 - 13423.036: 90.8366% ( 19) 00:10:10.617 13423.036 - 13475.676: 90.8854% ( 6) 00:10:10.617 13475.676 - 13580.954: 91.0075% ( 15) 00:10:10.617 13580.954 - 13686.233: 91.2516% ( 30) 00:10:10.617 13686.233 - 13791.512: 91.5283% ( 34) 00:10:10.617 13791.512 - 13896.790: 91.8376% ( 38) 00:10:10.617 13896.790 - 14002.069: 92.0573% ( 27) 00:10:10.617 14002.069 - 14107.348: 92.2363% ( 22) 00:10:10.617 14107.348 - 14212.627: 92.6188% ( 47) 00:10:10.617 14212.627 - 14317.905: 93.0745% ( 56) 00:10:10.617 14317.905 - 14423.184: 93.3350% ( 32) 00:10:10.617 14423.184 - 14528.463: 93.5221% ( 23) 00:10:10.617 14528.463 - 14633.741: 93.6930% ( 21) 00:10:10.617 14633.741 - 14739.020: 93.8395% ( 18) 00:10:10.617 14739.020 - 14844.299: 93.9941% ( 19) 00:10:10.617 14844.299 - 14949.578: 94.0755% ( 10) 00:10:10.617 14949.578 - 15054.856: 94.1813% ( 13) 00:10:10.617 15054.856 - 15160.135: 94.3197% ( 17) 00:10:10.617 15160.135 - 15265.414: 94.3604% ( 5) 00:10:10.617 15265.414 - 15370.692: 94.4499% ( 11) 00:10:10.617 15370.692 - 15475.971: 94.5312% ( 10) 00:10:10.617 15475.971 - 15581.250: 94.6126% ( 10) 00:10:10.617 15581.250 - 15686.529: 94.8161% ( 25) 00:10:10.617 15686.529 - 15791.807: 95.0602% ( 30) 00:10:10.617 15791.807 - 15897.086: 95.1579% ( 12) 00:10:10.617 15897.086 - 16002.365: 95.1904% ( 4) 00:10:10.617 16002.365 - 16107.643: 95.2148% ( 3) 00:10:10.617 16107.643 - 16212.922: 95.2474% ( 4) 00:10:10.617 16212.922 - 16318.201: 95.2718% ( 3) 00:10:10.617 16318.201 - 16423.480: 95.3125% ( 5) 00:10:10.617 16423.480 - 16528.758: 95.4102% ( 12) 00:10:10.617 16528.758 - 16634.037: 95.6950% ( 35) 00:10:10.617 16634.037 - 16739.316: 95.8415% ( 18) 00:10:10.617 16739.316 - 16844.594: 96.0612% ( 27) 00:10:10.617 16844.594 - 16949.873: 96.2728% ( 26) 00:10:10.617 16949.873 - 17055.152: 96.5495% ( 34) 00:10:10.617 17055.152 - 17160.431: 96.8669% ( 39) 00:10:10.617 17160.431 - 17265.709: 96.9808% ( 14) 00:10:10.617 17265.709 - 17370.988: 97.1029% ( 15) 00:10:10.617 17370.988 - 17476.267: 97.2005% ( 12) 00:10:10.617 17476.267 - 17581.545: 97.2656% ( 8) 00:10:10.617 17581.545 - 17686.824: 97.3307% ( 8) 00:10:10.617 17686.824 - 17792.103: 97.4040% ( 9) 00:10:10.617 17792.103 - 17897.382: 97.4691% ( 8) 00:10:10.617 17897.382 - 18002.660: 97.5342% ( 8) 00:10:10.617 18002.660 - 18107.939: 97.5993% ( 8) 00:10:10.617 18107.939 - 
18213.218: 97.6644% ( 8) 00:10:10.617 18213.218 - 18318.496: 97.7295% ( 8) 00:10:10.617 18318.496 - 18423.775: 97.8027% ( 9) 00:10:10.617 18423.775 - 18529.054: 97.8841% ( 10) 00:10:10.618 18529.054 - 18634.333: 97.9655% ( 10) 00:10:10.618 18634.333 - 18739.611: 98.0794% ( 14) 00:10:10.618 18739.611 - 18844.890: 98.2829% ( 25) 00:10:10.618 18844.890 - 18950.169: 98.4375% ( 19) 00:10:10.618 18950.169 - 19055.447: 98.5433% ( 13) 00:10:10.618 19055.447 - 19160.726: 98.6165% ( 9) 00:10:10.618 19160.726 - 19266.005: 98.6654% ( 6) 00:10:10.618 19266.005 - 19371.284: 98.7142% ( 6) 00:10:10.618 19371.284 - 19476.562: 98.7630% ( 6) 00:10:10.618 19476.562 - 19581.841: 98.8200% ( 7) 00:10:10.618 19581.841 - 19687.120: 98.8688% ( 6) 00:10:10.618 19687.120 - 19792.398: 98.9176% ( 6) 00:10:10.618 19792.398 - 19897.677: 98.9583% ( 5) 00:10:10.618 31162.500 - 31373.057: 99.0234% ( 8) 00:10:10.618 31373.057 - 31583.614: 99.0723% ( 6) 00:10:10.618 31583.614 - 31794.172: 99.1374% ( 8) 00:10:10.618 31794.172 - 32004.729: 99.1943% ( 7) 00:10:10.618 32004.729 - 32215.287: 99.2513% ( 7) 00:10:10.618 32215.287 - 32425.844: 99.3164% ( 8) 00:10:10.618 32425.844 - 32636.402: 99.3734% ( 7) 00:10:10.618 32636.402 - 32846.959: 99.4385% ( 8) 00:10:10.618 32846.959 - 33057.516: 99.4792% ( 5) 00:10:10.618 39584.797 - 39795.354: 99.5117% ( 4) 00:10:10.618 39795.354 - 40005.912: 99.5768% ( 8) 00:10:10.618 40005.912 - 40216.469: 99.6257% ( 6) 00:10:10.618 40216.469 - 40427.027: 99.6826% ( 7) 00:10:10.618 40427.027 - 40637.584: 99.7477% ( 8) 00:10:10.618 40637.584 - 40848.141: 99.8128% ( 8) 00:10:10.618 40848.141 - 41058.699: 99.8698% ( 7) 00:10:10.618 41058.699 - 41269.256: 99.9349% ( 8) 00:10:10.618 41269.256 - 41479.814: 100.0000% ( 8) 00:10:10.618 00:10:10.618 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:10.618 ============================================================================== 00:10:10.618 Range in us Cumulative IO count 00:10:10.618 8211.740 - 8264.379: 0.0081% ( 1) 00:10:10.618 8264.379 - 8317.018: 0.0163% ( 1) 00:10:10.618 8369.658 - 8422.297: 0.0488% ( 4) 00:10:10.618 8422.297 - 8474.937: 0.2035% ( 19) 00:10:10.618 8474.937 - 8527.576: 0.4639% ( 32) 00:10:10.618 8527.576 - 8580.215: 0.9521% ( 60) 00:10:10.618 8580.215 - 8632.855: 1.6683% ( 88) 00:10:10.618 8632.855 - 8685.494: 2.3112% ( 79) 00:10:10.618 8685.494 - 8738.133: 3.0355% ( 89) 00:10:10.618 8738.133 - 8790.773: 4.0690% ( 127) 00:10:10.618 8790.773 - 8843.412: 5.3467% ( 157) 00:10:10.618 8843.412 - 8896.051: 6.6895% ( 165) 00:10:10.618 8896.051 - 8948.691: 8.0729% ( 170) 00:10:10.618 8948.691 - 9001.330: 9.9691% ( 233) 00:10:10.618 9001.330 - 9053.969: 11.9303% ( 241) 00:10:10.618 9053.969 - 9106.609: 13.9893% ( 253) 00:10:10.618 9106.609 - 9159.248: 16.8945% ( 357) 00:10:10.618 9159.248 - 9211.888: 19.9788% ( 379) 00:10:10.618 9211.888 - 9264.527: 24.2025% ( 519) 00:10:10.618 9264.527 - 9317.166: 29.2887% ( 625) 00:10:10.618 9317.166 - 9369.806: 35.0830% ( 712) 00:10:10.618 9369.806 - 9422.445: 41.4307% ( 780) 00:10:10.618 9422.445 - 9475.084: 47.8434% ( 788) 00:10:10.618 9475.084 - 9527.724: 53.9469% ( 750) 00:10:10.618 9527.724 - 9580.363: 59.9528% ( 738) 00:10:10.618 9580.363 - 9633.002: 64.8031% ( 596) 00:10:10.618 9633.002 - 9685.642: 69.0837% ( 526) 00:10:10.618 9685.642 - 9738.281: 72.4040% ( 408) 00:10:10.618 9738.281 - 9790.920: 75.1383% ( 336) 00:10:10.618 9790.920 - 9843.560: 77.2624% ( 261) 00:10:10.618 9843.560 - 9896.199: 78.8249% ( 192) 00:10:10.618 9896.199 - 9948.839: 79.9561% ( 139) 00:10:10.618 9948.839 - 
10001.478: 80.7210% ( 94) 00:10:10.618 10001.478 - 10054.117: 81.1605% ( 54) 00:10:10.618 10054.117 - 10106.757: 81.4697% ( 38) 00:10:10.618 10106.757 - 10159.396: 81.7464% ( 34) 00:10:10.618 10159.396 - 10212.035: 81.9417% ( 24) 00:10:10.618 10212.035 - 10264.675: 82.1615% ( 27) 00:10:10.618 10264.675 - 10317.314: 82.3649% ( 25) 00:10:10.618 10317.314 - 10369.953: 82.4707% ( 13) 00:10:10.618 10369.953 - 10422.593: 82.5521% ( 10) 00:10:10.618 10422.593 - 10475.232: 82.6335% ( 10) 00:10:10.618 10475.232 - 10527.871: 82.6742% ( 5) 00:10:10.618 10527.871 - 10580.511: 82.7148% ( 5) 00:10:10.618 10580.511 - 10633.150: 82.7799% ( 8) 00:10:10.618 10633.150 - 10685.790: 82.9102% ( 16) 00:10:10.618 10685.790 - 10738.429: 83.0892% ( 22) 00:10:10.618 10738.429 - 10791.068: 83.3089% ( 27) 00:10:10.618 10791.068 - 10843.708: 83.6995% ( 48) 00:10:10.618 10843.708 - 10896.347: 83.8460% ( 18) 00:10:10.618 10896.347 - 10948.986: 84.0007% ( 19) 00:10:10.618 10948.986 - 11001.626: 84.2611% ( 32) 00:10:10.618 11001.626 - 11054.265: 84.4157% ( 19) 00:10:10.618 11054.265 - 11106.904: 84.5459% ( 16) 00:10:10.618 11106.904 - 11159.544: 84.7493% ( 25) 00:10:10.618 11159.544 - 11212.183: 84.8551% ( 13) 00:10:10.618 11212.183 - 11264.822: 84.9202% ( 8) 00:10:10.618 11264.822 - 11317.462: 84.9854% ( 8) 00:10:10.618 11317.462 - 11370.101: 85.0749% ( 11) 00:10:10.618 11370.101 - 11422.741: 85.1969% ( 15) 00:10:10.618 11422.741 - 11475.380: 85.3271% ( 16) 00:10:10.618 11475.380 - 11528.019: 85.4655% ( 17) 00:10:10.618 11528.019 - 11580.659: 85.6120% ( 18) 00:10:10.618 11580.659 - 11633.298: 85.7666% ( 19) 00:10:10.618 11633.298 - 11685.937: 85.9049% ( 17) 00:10:10.618 11685.937 - 11738.577: 85.9619% ( 7) 00:10:10.618 11738.577 - 11791.216: 86.0921% ( 16) 00:10:10.618 11791.216 - 11843.855: 86.2712% ( 22) 00:10:10.618 11843.855 - 11896.495: 86.4746% ( 25) 00:10:10.618 11896.495 - 11949.134: 86.7350% ( 32) 00:10:10.618 11949.134 - 12001.773: 86.9629% ( 28) 00:10:10.618 12001.773 - 12054.413: 87.1908% ( 28) 00:10:10.618 12054.413 - 12107.052: 87.2803% ( 11) 00:10:10.618 12107.052 - 12159.692: 87.4186% ( 17) 00:10:10.618 12159.692 - 12212.331: 87.5895% ( 21) 00:10:10.618 12212.331 - 12264.970: 87.8255% ( 29) 00:10:10.618 12264.970 - 12317.610: 88.0534% ( 28) 00:10:10.618 12317.610 - 12370.249: 88.3301% ( 34) 00:10:10.618 12370.249 - 12422.888: 88.5010% ( 21) 00:10:10.618 12422.888 - 12475.528: 88.6556% ( 19) 00:10:10.618 12475.528 - 12528.167: 88.8672% ( 26) 00:10:10.618 12528.167 - 12580.806: 89.0299% ( 20) 00:10:10.618 12580.806 - 12633.446: 89.1683% ( 17) 00:10:10.618 12633.446 - 12686.085: 89.3962% ( 28) 00:10:10.618 12686.085 - 12738.724: 89.5752% ( 22) 00:10:10.618 12738.724 - 12791.364: 89.7542% ( 22) 00:10:10.618 12791.364 - 12844.003: 89.8926% ( 17) 00:10:10.618 12844.003 - 12896.643: 89.9821% ( 11) 00:10:10.618 12896.643 - 12949.282: 90.0391% ( 7) 00:10:10.618 12949.282 - 13001.921: 90.1611% ( 15) 00:10:10.618 13001.921 - 13054.561: 90.2913% ( 16) 00:10:10.618 13054.561 - 13107.200: 90.4378% ( 18) 00:10:10.618 13107.200 - 13159.839: 90.4948% ( 7) 00:10:10.618 13159.839 - 13212.479: 90.5111% ( 2) 00:10:10.618 13212.479 - 13265.118: 90.5192% ( 1) 00:10:10.618 13265.118 - 13317.757: 90.5355% ( 2) 00:10:10.618 13317.757 - 13370.397: 90.5599% ( 3) 00:10:10.618 13370.397 - 13423.036: 90.5843% ( 3) 00:10:10.618 13423.036 - 13475.676: 90.6576% ( 9) 00:10:10.618 13475.676 - 13580.954: 90.8366% ( 22) 00:10:10.618 13580.954 - 13686.233: 91.0645% ( 28) 00:10:10.618 13686.233 - 13791.512: 91.3656% ( 37) 00:10:10.618 13791.512 
- 13896.790: 91.6911% ( 40) 00:10:10.618 13896.790 - 14002.069: 92.0085% ( 39) 00:10:10.618 14002.069 - 14107.348: 92.2770% ( 33) 00:10:10.618 14107.348 - 14212.627: 92.6758% ( 49) 00:10:10.618 14212.627 - 14317.905: 92.9606% ( 35) 00:10:10.618 14317.905 - 14423.184: 93.2454% ( 35) 00:10:10.618 14423.184 - 14528.463: 93.4814% ( 29) 00:10:10.618 14528.463 - 14633.741: 93.6686% ( 23) 00:10:10.618 14633.741 - 14739.020: 93.8965% ( 28) 00:10:10.618 14739.020 - 14844.299: 94.0592% ( 20) 00:10:10.618 14844.299 - 14949.578: 94.2383% ( 22) 00:10:10.618 14949.578 - 15054.856: 94.3359% ( 12) 00:10:10.618 15054.856 - 15160.135: 94.5150% ( 22) 00:10:10.618 15160.135 - 15265.414: 94.7428% ( 28) 00:10:10.618 15265.414 - 15370.692: 94.8649% ( 15) 00:10:10.618 15370.692 - 15475.971: 95.0033% ( 17) 00:10:10.618 15475.971 - 15581.250: 95.1416% ( 17) 00:10:10.618 15581.250 - 15686.529: 95.2474% ( 13) 00:10:10.618 15686.529 - 15791.807: 95.2718% ( 3) 00:10:10.618 15791.807 - 15897.086: 95.2962% ( 3) 00:10:10.618 15897.086 - 16002.365: 95.3776% ( 10) 00:10:10.618 16002.365 - 16107.643: 95.5485% ( 21) 00:10:10.618 16107.643 - 16212.922: 95.6868% ( 17) 00:10:10.618 16212.922 - 16318.201: 95.7113% ( 3) 00:10:10.618 16318.201 - 16423.480: 95.7357% ( 3) 00:10:10.618 16423.480 - 16528.758: 95.7601% ( 3) 00:10:10.618 16528.758 - 16634.037: 95.8740% ( 14) 00:10:10.618 16634.037 - 16739.316: 95.9473% ( 9) 00:10:10.618 16739.316 - 16844.594: 96.0042% ( 7) 00:10:10.618 16844.594 - 16949.873: 96.0531% ( 6) 00:10:10.618 16949.873 - 17055.152: 96.1100% ( 7) 00:10:10.618 17055.152 - 17160.431: 96.3704% ( 32) 00:10:10.618 17160.431 - 17265.709: 96.5658% ( 24) 00:10:10.618 17265.709 - 17370.988: 96.6715% ( 13) 00:10:10.618 17370.988 - 17476.267: 96.8343% ( 20) 00:10:10.618 17476.267 - 17581.545: 97.0540% ( 27) 00:10:10.618 17581.545 - 17686.824: 97.1517% ( 12) 00:10:10.618 17686.824 - 17792.103: 97.2900% ( 17) 00:10:10.618 17792.103 - 17897.382: 97.4040% ( 14) 00:10:10.618 17897.382 - 18002.660: 97.4609% ( 7) 00:10:10.618 18002.660 - 18107.939: 97.5260% ( 8) 00:10:10.618 18107.939 - 18213.218: 97.5911% ( 8) 00:10:10.618 18213.218 - 18318.496: 97.7051% ( 14) 00:10:10.618 18318.496 - 18423.775: 97.8597% ( 19) 00:10:10.618 18423.775 - 18529.054: 97.9167% ( 7) 00:10:10.618 18529.054 - 18634.333: 98.0225% ( 13) 00:10:10.618 18634.333 - 18739.611: 98.1689% ( 18) 00:10:10.618 18739.611 - 18844.890: 98.3073% ( 17) 00:10:10.618 18844.890 - 18950.169: 98.4619% ( 19) 00:10:10.618 18950.169 - 19055.447: 98.7061% ( 30) 00:10:10.618 19055.447 - 19160.726: 98.8444% ( 17) 00:10:10.618 19160.726 - 19266.005: 98.9339% ( 11) 00:10:10.618 19266.005 - 19371.284: 98.9583% ( 3) 00:10:10.619 29478.040 - 29688.598: 99.0072% ( 6) 00:10:10.619 29688.598 - 29899.155: 99.0723% ( 8) 00:10:10.619 29899.155 - 30109.712: 99.1292% ( 7) 00:10:10.619 30109.712 - 30320.270: 99.1862% ( 7) 00:10:10.619 30320.270 - 30530.827: 99.2513% ( 8) 00:10:10.619 30530.827 - 30741.385: 99.3245% ( 9) 00:10:10.619 30741.385 - 30951.942: 99.3896% ( 8) 00:10:10.619 30951.942 - 31162.500: 99.4548% ( 8) 00:10:10.619 31162.500 - 31373.057: 99.4792% ( 3) 00:10:10.619 37479.222 - 37689.780: 99.5117% ( 4) 00:10:10.619 37689.780 - 37900.337: 99.5850% ( 9) 00:10:10.619 37900.337 - 38110.895: 99.6419% ( 7) 00:10:10.619 38110.895 - 38321.452: 99.7070% ( 8) 00:10:10.619 38321.452 - 38532.010: 99.7640% ( 7) 00:10:10.619 38532.010 - 38742.567: 99.8291% ( 8) 00:10:10.619 38742.567 - 38953.124: 99.8942% ( 8) 00:10:10.619 38953.124 - 39163.682: 99.9512% ( 7) 00:10:10.619 39163.682 - 39374.239: 
100.0000% ( 6) 00:10:10.619 00:10:10.619 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:10.619 ============================================================================== 00:10:10.619 Range in us Cumulative IO count 00:10:10.619 8317.018 - 8369.658: 0.0081% ( 1) 00:10:10.619 8369.658 - 8422.297: 0.0570% ( 6) 00:10:10.619 8422.297 - 8474.937: 0.0895% ( 4) 00:10:10.619 8474.937 - 8527.576: 0.2116% ( 15) 00:10:10.619 8527.576 - 8580.215: 0.3743% ( 20) 00:10:10.619 8580.215 - 8632.855: 0.7243% ( 43) 00:10:10.619 8632.855 - 8685.494: 1.2939% ( 70) 00:10:10.619 8685.494 - 8738.133: 2.2054% ( 112) 00:10:10.619 8738.133 - 8790.773: 3.2552% ( 129) 00:10:10.619 8790.773 - 8843.412: 4.7119% ( 179) 00:10:10.619 8843.412 - 8896.051: 6.4453% ( 213) 00:10:10.619 8896.051 - 8948.691: 8.4147% ( 242) 00:10:10.619 8948.691 - 9001.330: 10.7178% ( 283) 00:10:10.619 9001.330 - 9053.969: 13.3382% ( 322) 00:10:10.619 9053.969 - 9106.609: 15.9505% ( 321) 00:10:10.619 9106.609 - 9159.248: 19.2301% ( 403) 00:10:10.619 9159.248 - 9211.888: 22.3877% ( 388) 00:10:10.619 9211.888 - 9264.527: 25.9521% ( 438) 00:10:10.619 9264.527 - 9317.166: 30.0212% ( 500) 00:10:10.619 9317.166 - 9369.806: 34.5703% ( 559) 00:10:10.619 9369.806 - 9422.445: 40.2507% ( 698) 00:10:10.619 9422.445 - 9475.084: 46.5413% ( 773) 00:10:10.619 9475.084 - 9527.724: 52.9867% ( 792) 00:10:10.619 9527.724 - 9580.363: 59.4482% ( 794) 00:10:10.619 9580.363 - 9633.002: 65.0879% ( 693) 00:10:10.619 9633.002 - 9685.642: 69.5068% ( 543) 00:10:10.619 9685.642 - 9738.281: 73.4701% ( 487) 00:10:10.619 9738.281 - 9790.920: 76.0579% ( 318) 00:10:10.619 9790.920 - 9843.560: 77.9053% ( 227) 00:10:10.619 9843.560 - 9896.199: 79.1748% ( 156) 00:10:10.619 9896.199 - 9948.839: 80.0130% ( 103) 00:10:10.619 9948.839 - 10001.478: 80.5176% ( 62) 00:10:10.619 10001.478 - 10054.117: 80.9245% ( 50) 00:10:10.619 10054.117 - 10106.757: 81.3965% ( 58) 00:10:10.619 10106.757 - 10159.396: 81.8685% ( 58) 00:10:10.619 10159.396 - 10212.035: 82.1045% ( 29) 00:10:10.619 10212.035 - 10264.675: 82.2754% ( 21) 00:10:10.619 10264.675 - 10317.314: 82.5033% ( 28) 00:10:10.619 10317.314 - 10369.953: 82.6172% ( 14) 00:10:10.619 10369.953 - 10422.593: 82.7230% ( 13) 00:10:10.619 10422.593 - 10475.232: 82.8451% ( 15) 00:10:10.619 10475.232 - 10527.871: 82.9834% ( 17) 00:10:10.619 10527.871 - 10580.511: 83.0729% ( 11) 00:10:10.619 10580.511 - 10633.150: 83.2357% ( 20) 00:10:10.619 10633.150 - 10685.790: 83.3415% ( 13) 00:10:10.619 10685.790 - 10738.429: 83.4147% ( 9) 00:10:10.619 10738.429 - 10791.068: 83.4880% ( 9) 00:10:10.619 10791.068 - 10843.708: 83.6100% ( 15) 00:10:10.619 10843.708 - 10896.347: 83.8379% ( 28) 00:10:10.619 10896.347 - 10948.986: 84.1227% ( 35) 00:10:10.619 10948.986 - 11001.626: 84.4157% ( 36) 00:10:10.619 11001.626 - 11054.265: 84.5703% ( 19) 00:10:10.619 11054.265 - 11106.904: 84.6191% ( 6) 00:10:10.619 11106.904 - 11159.544: 84.6761% ( 7) 00:10:10.619 11159.544 - 11212.183: 84.7249% ( 6) 00:10:10.619 11212.183 - 11264.822: 84.7900% ( 8) 00:10:10.619 11264.822 - 11317.462: 84.8389% ( 6) 00:10:10.619 11317.462 - 11370.101: 84.9202% ( 10) 00:10:10.619 11370.101 - 11422.741: 84.9935% ( 9) 00:10:10.619 11422.741 - 11475.380: 85.1318% ( 17) 00:10:10.619 11475.380 - 11528.019: 85.2946% ( 20) 00:10:10.619 11528.019 - 11580.659: 85.4492% ( 19) 00:10:10.619 11580.659 - 11633.298: 85.7503% ( 37) 00:10:10.619 11633.298 - 11685.937: 86.0758% ( 40) 00:10:10.619 11685.937 - 11738.577: 86.2793% ( 25) 00:10:10.619 11738.577 - 11791.216: 86.5072% ( 28) 
00:10:10.619 11791.216 - 11843.855: 86.6699% ( 20) 00:10:10.619 11843.855 - 11896.495: 86.8164% ( 18) 00:10:10.619 11896.495 - 11949.134: 86.9466% ( 16) 00:10:10.619 11949.134 - 12001.773: 87.2233% ( 34) 00:10:10.619 12001.773 - 12054.413: 87.3291% ( 13) 00:10:10.619 12054.413 - 12107.052: 87.4349% ( 13) 00:10:10.619 12107.052 - 12159.692: 87.5488% ( 14) 00:10:10.619 12159.692 - 12212.331: 87.6790% ( 16) 00:10:10.619 12212.331 - 12264.970: 87.8337% ( 19) 00:10:10.619 12264.970 - 12317.610: 87.9395% ( 13) 00:10:10.619 12317.610 - 12370.249: 88.0371% ( 12) 00:10:10.619 12370.249 - 12422.888: 88.0941% ( 7) 00:10:10.619 12422.888 - 12475.528: 88.1104% ( 2) 00:10:10.619 12475.528 - 12528.167: 88.1510% ( 5) 00:10:10.619 12528.167 - 12580.806: 88.2324% ( 10) 00:10:10.619 12580.806 - 12633.446: 88.3057% ( 9) 00:10:10.619 12633.446 - 12686.085: 88.3952% ( 11) 00:10:10.619 12686.085 - 12738.724: 88.5173% ( 15) 00:10:10.619 12738.724 - 12791.364: 88.6556% ( 17) 00:10:10.619 12791.364 - 12844.003: 88.7126% ( 7) 00:10:10.619 12844.003 - 12896.643: 88.7939% ( 10) 00:10:10.619 12896.643 - 12949.282: 88.8835% ( 11) 00:10:10.619 12949.282 - 13001.921: 89.0544% ( 21) 00:10:10.619 13001.921 - 13054.561: 89.2904% ( 29) 00:10:10.619 13054.561 - 13107.200: 89.4287% ( 17) 00:10:10.619 13107.200 - 13159.839: 89.6077% ( 22) 00:10:10.619 13159.839 - 13212.479: 89.7705% ( 20) 00:10:10.619 13212.479 - 13265.118: 89.9740% ( 25) 00:10:10.619 13265.118 - 13317.757: 90.2018% ( 28) 00:10:10.619 13317.757 - 13370.397: 90.5436% ( 42) 00:10:10.619 13370.397 - 13423.036: 90.7796% ( 29) 00:10:10.619 13423.036 - 13475.676: 91.0889% ( 38) 00:10:10.619 13475.676 - 13580.954: 91.4632% ( 46) 00:10:10.619 13580.954 - 13686.233: 91.7806% ( 39) 00:10:10.619 13686.233 - 13791.512: 91.9678% ( 23) 00:10:10.619 13791.512 - 13896.790: 92.1387% ( 21) 00:10:10.619 13896.790 - 14002.069: 92.3258% ( 23) 00:10:10.619 14002.069 - 14107.348: 92.5130% ( 23) 00:10:10.619 14107.348 - 14212.627: 92.7165% ( 25) 00:10:10.619 14212.627 - 14317.905: 92.9606% ( 30) 00:10:10.619 14317.905 - 14423.184: 93.0908% ( 16) 00:10:10.619 14423.184 - 14528.463: 93.2454% ( 19) 00:10:10.619 14528.463 - 14633.741: 93.5872% ( 42) 00:10:10.619 14633.741 - 14739.020: 93.9860% ( 49) 00:10:10.619 14739.020 - 14844.299: 94.3115% ( 40) 00:10:10.619 14844.299 - 14949.578: 94.5068% ( 24) 00:10:10.619 14949.578 - 15054.856: 94.6289% ( 15) 00:10:10.619 15054.856 - 15160.135: 94.7103% ( 10) 00:10:10.619 15160.135 - 15265.414: 94.7347% ( 3) 00:10:10.619 15265.414 - 15370.692: 94.7591% ( 3) 00:10:10.619 15370.692 - 15475.971: 94.8324% ( 9) 00:10:10.619 15475.971 - 15581.250: 94.9300% ( 12) 00:10:10.619 15581.250 - 15686.529: 95.0114% ( 10) 00:10:10.619 15686.529 - 15791.807: 95.0765% ( 8) 00:10:10.619 15791.807 - 15897.086: 95.1335% ( 7) 00:10:10.619 15897.086 - 16002.365: 95.1986% ( 8) 00:10:10.619 16002.365 - 16107.643: 95.5241% ( 40) 00:10:10.619 16107.643 - 16212.922: 95.7926% ( 33) 00:10:10.619 16212.922 - 16318.201: 95.8577% ( 8) 00:10:10.619 16318.201 - 16423.480: 95.9066% ( 6) 00:10:10.619 16423.480 - 16528.758: 95.9798% ( 9) 00:10:10.619 16528.758 - 16634.037: 96.0612% ( 10) 00:10:10.619 16634.037 - 16739.316: 96.1100% ( 6) 00:10:10.619 16739.316 - 16844.594: 96.1589% ( 6) 00:10:10.619 16844.594 - 16949.873: 96.2321% ( 9) 00:10:10.619 16949.873 - 17055.152: 96.3053% ( 9) 00:10:10.619 17055.152 - 17160.431: 96.5576% ( 31) 00:10:10.620 17160.431 - 17265.709: 96.6146% ( 7) 00:10:10.620 17265.709 - 17370.988: 96.6471% ( 4) 00:10:10.620 17370.988 - 17476.267: 96.6715% ( 3) 
00:10:10.620 17476.267 - 17581.545: 96.7855% ( 14) 00:10:10.620 17581.545 - 17686.824: 96.9482% ( 20) 00:10:10.620 17686.824 - 17792.103: 97.1517% ( 25) 00:10:10.620 17792.103 - 17897.382: 97.2982% ( 18) 00:10:10.620 17897.382 - 18002.660: 97.4609% ( 20) 00:10:10.620 18002.660 - 18107.939: 97.7295% ( 33) 00:10:10.620 18107.939 - 18213.218: 97.9085% ( 22) 00:10:10.620 18213.218 - 18318.496: 98.0387% ( 16) 00:10:10.620 18318.496 - 18423.775: 98.1527% ( 14) 00:10:10.620 18423.775 - 18529.054: 98.2422% ( 11) 00:10:10.620 18529.054 - 18634.333: 98.3154% ( 9) 00:10:10.620 18634.333 - 18739.611: 98.3968% ( 10) 00:10:10.620 18739.611 - 18844.890: 98.4456% ( 6) 00:10:10.620 18950.169 - 19055.447: 98.4538% ( 1) 00:10:10.620 19055.447 - 19160.726: 98.5189% ( 8) 00:10:10.620 19160.726 - 19266.005: 98.6165% ( 12) 00:10:10.620 19266.005 - 19371.284: 98.7467% ( 16) 00:10:10.620 19371.284 - 19476.562: 98.8525% ( 13) 00:10:10.620 19476.562 - 19581.841: 98.9176% ( 8) 00:10:10.620 19581.841 - 19687.120: 98.9583% ( 5) 00:10:10.620 28214.696 - 28425.253: 99.0234% ( 8) 00:10:10.620 28425.253 - 28635.810: 99.0804% ( 7) 00:10:10.620 28635.810 - 28846.368: 99.1455% ( 8) 00:10:10.620 28846.368 - 29056.925: 99.2025% ( 7) 00:10:10.620 29056.925 - 29267.483: 99.2676% ( 8) 00:10:10.620 29267.483 - 29478.040: 99.3245% ( 7) 00:10:10.620 29478.040 - 29688.598: 99.3815% ( 7) 00:10:10.620 29688.598 - 29899.155: 99.4385% ( 7) 00:10:10.620 29899.155 - 30109.712: 99.4792% ( 5) 00:10:10.620 36005.320 - 36215.878: 99.5199% ( 5) 00:10:10.620 36215.878 - 36426.435: 99.5850% ( 8) 00:10:10.620 36426.435 - 36636.993: 99.6419% ( 7) 00:10:10.620 36636.993 - 36847.550: 99.6989% ( 7) 00:10:10.620 36847.550 - 37058.108: 99.7559% ( 7) 00:10:10.620 37058.108 - 37268.665: 99.8210% ( 8) 00:10:10.620 37268.665 - 37479.222: 99.8861% ( 8) 00:10:10.620 37479.222 - 37689.780: 99.9430% ( 7) 00:10:10.620 37689.780 - 37900.337: 100.0000% ( 7) 00:10:10.620 00:10:10.620 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:10.620 ============================================================================== 00:10:10.620 Range in us Cumulative IO count 00:10:10.620 8211.740 - 8264.379: 0.0081% ( 1) 00:10:10.620 8317.018 - 8369.658: 0.0324% ( 3) 00:10:10.620 8369.658 - 8422.297: 0.0405% ( 1) 00:10:10.620 8422.297 - 8474.937: 0.0729% ( 4) 00:10:10.620 8474.937 - 8527.576: 0.1457% ( 9) 00:10:10.620 8527.576 - 8580.215: 0.4453% ( 37) 00:10:10.620 8580.215 - 8632.855: 1.0039% ( 69) 00:10:10.620 8632.855 - 8685.494: 1.8782% ( 108) 00:10:10.620 8685.494 - 8738.133: 2.9469% ( 132) 00:10:10.620 8738.133 - 8790.773: 4.2179% ( 157) 00:10:10.620 8790.773 - 8843.412: 5.6185% ( 173) 00:10:10.620 8843.412 - 8896.051: 7.3834% ( 218) 00:10:10.620 8896.051 - 8948.691: 9.4964% ( 261) 00:10:10.620 8948.691 - 9001.330: 11.3990% ( 235) 00:10:10.620 9001.330 - 9053.969: 13.6253% ( 275) 00:10:10.620 9053.969 - 9106.609: 15.4793% ( 229) 00:10:10.620 9106.609 - 9159.248: 17.5518% ( 256) 00:10:10.620 9159.248 - 9211.888: 20.7578% ( 396) 00:10:10.620 9211.888 - 9264.527: 24.9109% ( 513) 00:10:10.620 9264.527 - 9317.166: 29.5256% ( 570) 00:10:10.620 9317.166 - 9369.806: 34.5126% ( 616) 00:10:10.620 9369.806 - 9422.445: 40.9165% ( 791) 00:10:10.620 9422.445 - 9475.084: 47.3041% ( 789) 00:10:10.620 9475.084 - 9527.724: 53.5541% ( 772) 00:10:10.620 9527.724 - 9580.363: 59.4802% ( 732) 00:10:10.620 9580.363 - 9633.002: 64.6940% ( 644) 00:10:10.620 9633.002 - 9685.642: 69.0415% ( 537) 00:10:10.620 9685.642 - 9738.281: 72.5470% ( 433) 00:10:10.620 9738.281 - 9790.920: 
75.3238% ( 343) 00:10:10.620 9790.920 - 9843.560: 77.4207% ( 259) 00:10:10.620 9843.560 - 9896.199: 78.7160% ( 160) 00:10:10.620 9896.199 - 9948.839: 79.7442% ( 127) 00:10:10.620 9948.839 - 10001.478: 80.3514% ( 75) 00:10:10.620 10001.478 - 10054.117: 80.6833% ( 41) 00:10:10.620 10054.117 - 10106.757: 80.9505% ( 33) 00:10:10.620 10106.757 - 10159.396: 81.0800% ( 16) 00:10:10.620 10159.396 - 10212.035: 81.2905% ( 26) 00:10:10.620 10212.035 - 10264.675: 81.4929% ( 25) 00:10:10.620 10264.675 - 10317.314: 81.8005% ( 38) 00:10:10.620 10317.314 - 10369.953: 82.0191% ( 27) 00:10:10.620 10369.953 - 10422.593: 82.2377% ( 27) 00:10:10.620 10422.593 - 10475.232: 82.4320% ( 24) 00:10:10.620 10475.232 - 10527.871: 82.5939% ( 20) 00:10:10.620 10527.871 - 10580.511: 82.7963% ( 25) 00:10:10.620 10580.511 - 10633.150: 83.0149% ( 27) 00:10:10.620 10633.150 - 10685.790: 83.1120% ( 12) 00:10:10.620 10685.790 - 10738.429: 83.1930% ( 10) 00:10:10.620 10738.429 - 10791.068: 83.3873% ( 24) 00:10:10.620 10791.068 - 10843.708: 83.6140% ( 28) 00:10:10.620 10843.708 - 10896.347: 83.7111% ( 12) 00:10:10.620 10896.347 - 10948.986: 83.7921% ( 10) 00:10:10.620 10948.986 - 11001.626: 83.8488% ( 7) 00:10:10.620 11001.626 - 11054.265: 83.9135% ( 8) 00:10:10.620 11054.265 - 11106.904: 83.9702% ( 7) 00:10:10.620 11106.904 - 11159.544: 84.0350% ( 8) 00:10:10.620 11159.544 - 11212.183: 84.1078% ( 9) 00:10:10.620 11212.183 - 11264.822: 84.1969% ( 11) 00:10:10.620 11264.822 - 11317.462: 84.3102% ( 14) 00:10:10.620 11317.462 - 11370.101: 84.5855% ( 34) 00:10:10.620 11370.101 - 11422.741: 84.8446% ( 32) 00:10:10.620 11422.741 - 11475.380: 85.0712% ( 28) 00:10:10.620 11475.380 - 11528.019: 85.4356% ( 45) 00:10:10.620 11528.019 - 11580.659: 85.6460% ( 26) 00:10:10.620 11580.659 - 11633.298: 85.7351% ( 11) 00:10:10.620 11633.298 - 11685.937: 85.8161% ( 10) 00:10:10.620 11685.937 - 11738.577: 85.9051% ( 11) 00:10:10.620 11738.577 - 11791.216: 85.9375% ( 4) 00:10:10.620 11791.216 - 11843.855: 85.9780% ( 5) 00:10:10.620 11843.855 - 11896.495: 86.0347% ( 7) 00:10:10.620 11896.495 - 11949.134: 86.1075% ( 9) 00:10:10.620 11949.134 - 12001.773: 86.1966% ( 11) 00:10:10.620 12001.773 - 12054.413: 86.3018% ( 13) 00:10:10.620 12054.413 - 12107.052: 86.4637% ( 20) 00:10:10.620 12107.052 - 12159.692: 86.6095% ( 18) 00:10:10.620 12159.692 - 12212.331: 86.7309% ( 15) 00:10:10.620 12212.331 - 12264.970: 86.9009% ( 21) 00:10:10.620 12264.970 - 12317.610: 87.0304% ( 16) 00:10:10.620 12317.610 - 12370.249: 87.1195% ( 11) 00:10:10.620 12370.249 - 12422.888: 87.2571% ( 17) 00:10:10.620 12422.888 - 12475.528: 87.2976% ( 5) 00:10:10.620 12475.528 - 12528.167: 87.3462% ( 6) 00:10:10.620 12528.167 - 12580.806: 87.4433% ( 12) 00:10:10.620 12580.806 - 12633.446: 87.5081% ( 8) 00:10:10.620 12633.446 - 12686.085: 87.6295% ( 15) 00:10:10.620 12686.085 - 12738.724: 87.7995% ( 21) 00:10:10.620 12738.724 - 12791.364: 87.9938% ( 24) 00:10:10.620 12791.364 - 12844.003: 88.1234% ( 16) 00:10:10.620 12844.003 - 12896.643: 88.2853% ( 20) 00:10:10.620 12896.643 - 12949.282: 88.4148% ( 16) 00:10:10.620 12949.282 - 13001.921: 88.6658% ( 31) 00:10:10.620 13001.921 - 13054.561: 88.8358% ( 21) 00:10:10.620 13054.561 - 13107.200: 89.1273% ( 36) 00:10:10.620 13107.200 - 13159.839: 89.4511% ( 40) 00:10:10.620 13159.839 - 13212.479: 89.7426% ( 36) 00:10:10.620 13212.479 - 13265.118: 89.9854% ( 30) 00:10:10.620 13265.118 - 13317.757: 90.1878% ( 25) 00:10:10.620 13317.757 - 13370.397: 90.4145% ( 28) 00:10:10.620 13370.397 - 13423.036: 90.6817% ( 33) 00:10:10.620 13423.036 - 13475.676: 
90.8679% ( 23) 00:10:10.620 13475.676 - 13580.954: 91.1108% ( 30) 00:10:10.620 13580.954 - 13686.233: 91.2727% ( 20) 00:10:10.620 13686.233 - 13791.512: 91.3779% ( 13) 00:10:10.620 13791.512 - 13896.790: 91.4913% ( 14) 00:10:10.620 13896.790 - 14002.069: 91.6046% ( 14) 00:10:10.620 14002.069 - 14107.348: 91.8232% ( 27) 00:10:10.620 14107.348 - 14212.627: 92.1794% ( 44) 00:10:10.620 14212.627 - 14317.905: 92.5113% ( 41) 00:10:10.620 14317.905 - 14423.184: 92.8433% ( 41) 00:10:10.620 14423.184 - 14528.463: 93.1995% ( 44) 00:10:10.620 14528.463 - 14633.741: 93.4747% ( 34) 00:10:10.620 14633.741 - 14739.020: 93.6609% ( 23) 00:10:10.620 14739.020 - 14844.299: 93.8310% ( 21) 00:10:10.620 14844.299 - 14949.578: 94.0010% ( 21) 00:10:10.620 14949.578 - 15054.856: 94.2438% ( 30) 00:10:10.620 15054.856 - 15160.135: 94.4381% ( 24) 00:10:10.620 15160.135 - 15265.414: 94.5434% ( 13) 00:10:10.620 15265.414 - 15370.692: 94.7782% ( 29) 00:10:10.620 15370.692 - 15475.971: 94.9806% ( 25) 00:10:10.620 15475.971 - 15581.250: 95.1425% ( 20) 00:10:10.620 15581.250 - 15686.529: 95.2477% ( 13) 00:10:10.620 15686.529 - 15791.807: 95.3692% ( 15) 00:10:10.620 15791.807 - 15897.086: 95.5149% ( 18) 00:10:10.620 15897.086 - 16002.365: 95.6201% ( 13) 00:10:10.620 16002.365 - 16107.643: 95.7011% ( 10) 00:10:10.620 16107.643 - 16212.922: 95.7821% ( 10) 00:10:10.620 16212.922 - 16318.201: 95.8630% ( 10) 00:10:10.620 16318.201 - 16423.480: 96.0087% ( 18) 00:10:10.620 16423.480 - 16528.758: 96.0816% ( 9) 00:10:10.620 16528.758 - 16634.037: 96.1302% ( 6) 00:10:10.620 16634.037 - 16739.316: 96.2273% ( 12) 00:10:10.620 16739.316 - 16844.594: 96.3326% ( 13) 00:10:10.620 16844.594 - 16949.873: 96.3731% ( 5) 00:10:10.620 16949.873 - 17055.152: 96.4459% ( 9) 00:10:10.620 17055.152 - 17160.431: 96.6078% ( 20) 00:10:10.620 17160.431 - 17265.709: 96.7778% ( 21) 00:10:10.620 17265.709 - 17370.988: 96.8912% ( 14) 00:10:10.620 17370.988 - 17476.267: 97.2312% ( 42) 00:10:10.620 17476.267 - 17581.545: 97.3608% ( 16) 00:10:10.620 17581.545 - 17686.824: 97.4660% ( 13) 00:10:10.620 17686.824 - 17792.103: 97.5793% ( 14) 00:10:10.620 17792.103 - 17897.382: 97.6846% ( 13) 00:10:10.620 17897.382 - 18002.660: 97.8303% ( 18) 00:10:10.620 18002.660 - 18107.939: 97.9517% ( 15) 00:10:10.620 18107.939 - 18213.218: 98.0570% ( 13) 00:10:10.620 18213.218 - 18318.496: 98.0975% ( 5) 00:10:10.621 18318.496 - 18423.775: 98.2108% ( 14) 00:10:10.621 18423.775 - 18529.054: 98.3323% ( 15) 00:10:10.621 18529.054 - 18634.333: 98.3889% ( 7) 00:10:10.621 18634.333 - 18739.611: 98.4213% ( 4) 00:10:10.621 18739.611 - 18844.890: 98.4456% ( 3) 00:10:10.621 19160.726 - 19266.005: 98.4537% ( 1) 00:10:10.621 19266.005 - 19371.284: 98.4699% ( 2) 00:10:10.621 19371.284 - 19476.562: 98.5266% ( 7) 00:10:10.621 19476.562 - 19581.841: 98.6237% ( 12) 00:10:10.621 19581.841 - 19687.120: 98.8666% ( 30) 00:10:10.621 19687.120 - 19792.398: 98.9799% ( 14) 00:10:10.621 19792.398 - 19897.677: 99.0690% ( 11) 00:10:10.621 19897.677 - 20002.956: 99.1256% ( 7) 00:10:10.621 20002.956 - 20108.235: 99.1580% ( 4) 00:10:10.621 20108.235 - 20213.513: 99.1904% ( 4) 00:10:10.621 20213.513 - 20318.792: 99.2228% ( 4) 00:10:10.621 20318.792 - 20424.071: 99.2552% ( 4) 00:10:10.621 20424.071 - 20529.349: 99.2876% ( 4) 00:10:10.621 20529.349 - 20634.628: 99.3199% ( 4) 00:10:10.621 20634.628 - 20739.907: 99.3523% ( 4) 00:10:10.621 20739.907 - 20845.186: 99.3847% ( 4) 00:10:10.621 20845.186 - 20950.464: 99.4171% ( 4) 00:10:10.621 20950.464 - 21055.743: 99.4495% ( 4) 00:10:10.621 21055.743 - 21161.022: 
99.4819% ( 4) 00:10:10.621 27583.023 - 27793.581: 99.5062% ( 3) 00:10:10.621 27793.581 - 28004.138: 99.5709% ( 8) 00:10:10.621 28004.138 - 28214.696: 99.6276% ( 7) 00:10:10.621 28214.696 - 28425.253: 99.7005% ( 9) 00:10:10.621 28425.253 - 28635.810: 99.7571% ( 7) 00:10:10.621 28635.810 - 28846.368: 99.8219% ( 8) 00:10:10.621 28846.368 - 29056.925: 99.8867% ( 8) 00:10:10.621 29056.925 - 29267.483: 99.9433% ( 7) 00:10:10.621 29267.483 - 29478.040: 100.0000% ( 7) 00:10:10.621 00:10:10.621 08:30:45 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:10.621 00:10:10.621 real 0m2.684s 00:10:10.621 user 0m2.265s 00:10:10.621 sys 0m0.300s 00:10:10.621 08:30:45 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.621 08:30:45 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:10.621 ************************************ 00:10:10.621 END TEST nvme_perf 00:10:10.621 ************************************ 00:10:10.621 08:30:45 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:10.621 08:30:45 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:10.621 08:30:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.621 08:30:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.621 ************************************ 00:10:10.621 START TEST nvme_hello_world 00:10:10.621 ************************************ 00:10:10.621 08:30:45 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:10.880 Initializing NVMe Controllers 00:10:10.880 Attached to 0000:00:10.0 00:10:10.880 Namespace ID: 1 size: 6GB 00:10:10.880 Attached to 0000:00:11.0 00:10:10.881 Namespace ID: 1 size: 5GB 00:10:10.881 Attached to 0000:00:13.0 00:10:10.881 Namespace ID: 1 size: 1GB 00:10:10.881 Attached to 0000:00:12.0 00:10:10.881 Namespace ID: 1 size: 4GB 00:10:10.881 Namespace ID: 2 size: 4GB 00:10:10.881 Namespace ID: 3 size: 4GB 00:10:10.881 Initialization complete. 00:10:10.881 INFO: using host memory buffer for IO 00:10:10.881 Hello world! 00:10:10.881 INFO: using host memory buffer for IO 00:10:10.881 Hello world! 00:10:10.881 INFO: using host memory buffer for IO 00:10:10.881 Hello world! 00:10:10.881 INFO: using host memory buffer for IO 00:10:10.881 Hello world! 00:10:10.881 INFO: using host memory buffer for IO 00:10:10.881 Hello world! 00:10:10.881 INFO: using host memory buffer for IO 00:10:10.881 Hello world! 
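The hello_world pass above probes all four PCIe controllers, prints each active namespace's size, and then writes and reads back the buffer that produces the "Hello world!" lines. A minimal sketch of that attach-and-enumerate flow against the public SPDK NVMe API (the process name and the GB rounding are illustrative, not taken from the test source):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Accept every controller the PCIe probe reports. */
    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;
    }

    /* Called once per attached controller; walk its active namespaces. */
    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("Namespace ID: %u size: %juGB\n", nsid,
                   (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000ULL));
        }
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_sketch"; /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* A NULL transport ID probes all local PCIe NVMe devices. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }

The same probe/attach scaffolding underlies every test binary in this run; only the per-controller work done in attach_cb changes.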
00:10:10.881 00:10:10.881 real 0m0.317s 00:10:10.881 user 0m0.121s 00:10:10.881 sys 0m0.147s 00:10:10.881 08:30:45 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.881 08:30:45 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:10.881 ************************************ 00:10:10.881 END TEST nvme_hello_world 00:10:10.881 ************************************ 00:10:10.881 08:30:45 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:10.881 08:30:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.881 08:30:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.881 08:30:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.881 ************************************ 00:10:10.881 START TEST nvme_sgl 00:10:10.881 ************************************ 00:10:10.881 08:30:45 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:11.140 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:11.140 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:11.140 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:11.140 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:11.140 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:11.140 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:11.140 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:11.140 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:11.140 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:11.140 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:11.140 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:11.140 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:11.140 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:10:11.140 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:11.140 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:11.140 NVMe Readv/Writev Request test 00:10:11.140 Attached to 0000:00:10.0 00:10:11.140 Attached to 0000:00:11.0 00:10:11.140 Attached to 0000:00:13.0 00:10:11.140 Attached to 0000:00:12.0 00:10:11.140 0000:00:10.0: build_io_request_2 test passed 00:10:11.140 0000:00:10.0: build_io_request_4 test passed 00:10:11.140 0000:00:10.0: build_io_request_5 test passed 00:10:11.140 0000:00:10.0: build_io_request_6 test passed 00:10:11.140 0000:00:10.0: build_io_request_7 test passed 00:10:11.140 0000:00:10.0: build_io_request_10 test passed 00:10:11.140 0000:00:11.0: build_io_request_2 test passed 00:10:11.140 0000:00:11.0: build_io_request_4 test passed 00:10:11.140 0000:00:11.0: build_io_request_5 test passed 00:10:11.140 0000:00:11.0: build_io_request_6 test passed 00:10:11.140 0000:00:11.0: build_io_request_7 test passed 00:10:11.140 0000:00:11.0: build_io_request_10 test passed 00:10:11.140 Cleaning up... 00:10:11.140 00:10:11.140 real 0m0.387s 00:10:11.140 user 0m0.189s 00:10:11.140 sys 0m0.151s 00:10:11.140 08:30:46 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.140 08:30:46 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:11.140 ************************************ 00:10:11.140 END TEST nvme_sgl 00:10:11.140 ************************************ 00:10:11.399 08:30:46 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:11.399 08:30:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.399 08:30:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.399 08:30:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.399 ************************************ 00:10:11.399 START TEST nvme_e2edp 00:10:11.399 ************************************ 00:10:11.399 08:30:46 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:11.658 NVMe Write/Read with End-to-End data protection test 00:10:11.658 Attached to 0000:00:10.0 00:10:11.658 Attached to 0000:00:11.0 00:10:11.658 Attached to 0000:00:13.0 00:10:11.658 Attached to 0000:00:12.0 00:10:11.658 Cleaning up... 
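On the nvme_sgl results above: the "Invalid IO length parameter" lines are deliberately mis-sized scatter-gather requests the driver is expected to reject, and the "test passed" lines are the well-formed ones. A hedged sketch of a single vectored read through spdk_nvme_ns_cmd_readv(); the one-element SGL, the context struct, and all callback names are illustrative, and ns/qpair are assumed to come from an attach flow like the earlier sketch:

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    struct sgl_ctx {
        void *buf;       /* single DMA-safe segment (illustrative) */
        uint32_t len;
        uint32_t offset;
        bool done;
    };

    /* The driver rewinds the SGL iterator before building the command. */
    static void
    reset_sgl(void *arg, uint32_t offset)
    {
        ((struct sgl_ctx *)arg)->offset = offset;
    }

    /* Hand the driver the next scatter-gather element. */
    static int
    next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *c = arg;

        *address = (uint8_t *)c->buf + c->offset;
        *length = c->len - c->offset;
        return 0;
    }

    static void
    read_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* Real code would also check spdk_nvme_cpl_is_error(cpl). */
        ((struct sgl_ctx *)arg)->done = true;
    }

    static int
    readv_one_sector(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
    {
        struct sgl_ctx c = { .len = spdk_nvme_ns_get_sector_size(ns) };

        c.buf = spdk_dma_zmalloc(c.len, 0x1000, NULL);
        if (c.buf == NULL) {
            return -1;
        }
        if (spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* lba */, 1 /* count */,
                                   read_done, &c, 0 /* io_flags */,
                                   reset_sgl, next_sge) != 0) {
            spdk_dma_free(c.buf);
            return -1;
        }
        while (!c.done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        spdk_dma_free(c.buf);
        return 0;
    }

The invalid cases in the test are variants of this same request whose SGE lengths do not add up to the requested sector count, which the driver rejects at submit time.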
00:10:11.658 00:10:11.658 real 0m0.299s 00:10:11.658 user 0m0.105s 00:10:11.658 sys 0m0.147s 00:10:11.658 08:30:46 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.658 08:30:46 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:11.658 ************************************ 00:10:11.658 END TEST nvme_e2edp 00:10:11.658 ************************************ 00:10:11.658 08:30:46 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:11.658 08:30:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.658 08:30:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.658 08:30:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.658 ************************************ 00:10:11.658 START TEST nvme_reserve 00:10:11.658 ************************************ 00:10:11.658 08:30:46 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:11.917 ===================================================== 00:10:11.917 NVMe Controller at PCI bus 0, device 16, function 0 00:10:11.917 ===================================================== 00:10:11.917 Reservations: Not Supported 00:10:11.917 ===================================================== 00:10:11.917 NVMe Controller at PCI bus 0, device 17, function 0 00:10:11.917 ===================================================== 00:10:11.917 Reservations: Not Supported 00:10:11.917 ===================================================== 00:10:11.917 NVMe Controller at PCI bus 0, device 19, function 0 00:10:11.917 ===================================================== 00:10:11.917 Reservations: Not Supported 00:10:11.917 ===================================================== 00:10:11.917 NVMe Controller at PCI bus 0, device 18, function 0 00:10:11.917 ===================================================== 00:10:11.917 Reservations: Not Supported 00:10:11.917 Reservation test passed 00:10:11.917 00:10:11.917 real 0m0.293s 00:10:11.917 user 0m0.105s 00:10:11.917 sys 0m0.143s 00:10:11.917 08:30:46 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.917 08:30:46 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:11.917 ************************************ 00:10:11.917 END TEST nvme_reserve 00:10:11.917 ************************************ 00:10:11.917 08:30:46 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:11.917 08:30:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.917 08:30:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.917 08:30:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.176 ************************************ 00:10:12.176 START TEST nvme_err_injection 00:10:12.176 ************************************ 00:10:12.176 08:30:47 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:12.435 NVMe Error Injection test 00:10:12.435 Attached to 0000:00:10.0 00:10:12.435 Attached to 0000:00:11.0 00:10:12.435 Attached to 0000:00:13.0 00:10:12.435 Attached to 0000:00:12.0 00:10:12.435 0000:00:13.0: get features failed as expected 00:10:12.435 0000:00:12.0: get features failed as expected 00:10:12.435 0000:00:10.0: get features failed as expected 00:10:12.435 0000:00:11.0: get features failed as expected 00:10:12.435 
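For the nvme_reserve run above, "Reservations: Not Supported" means the emulated namespaces advertise no reservation capabilities at all. A sketch of that capability check, assuming only that RESCAP from Identify Namespace is exposed as the rescap field of struct spdk_nvme_ns_data:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* RESCAP (Identify Namespace, byte 31) is zero when the namespace
     * supports no reservation type, which is what the test reports. */
    static bool
    ns_supports_reservations(struct spdk_nvme_ns *ns)
    {
        const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);
        uint8_t rescap;

        /* Copy the bitfield struct out as its raw byte; the individual
         * capability bits are defined in spdk/nvme_spec.h. */
        memcpy(&rescap, &nsdata->rescap, sizeof(rescap));
        return rescap != 0;
    }

When the byte is non-zero, the spdk_nvme_ns_cmd_reservation_register/acquire/release/report calls drive the actual reservation exercise; on these QEMU namespaces the test can only verify the unsupported path.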
0000:00:10.0: get features successfully as expected 00:10:12.435 0000:00:11.0: get features successfully as expected 00:10:12.435 0000:00:13.0: get features successfully as expected 00:10:12.436 0000:00:12.0: get features successfully as expected 00:10:12.436 0000:00:10.0: read failed as expected 00:10:12.436 0000:00:11.0: read failed as expected 00:10:12.436 0000:00:13.0: read failed as expected 00:10:12.436 0000:00:12.0: read failed as expected 00:10:12.436 0000:00:10.0: read successfully as expected 00:10:12.436 0000:00:11.0: read successfully as expected 00:10:12.436 0000:00:13.0: read successfully as expected 00:10:12.436 0000:00:12.0: read successfully as expected 00:10:12.436 Cleaning up... 00:10:12.436 ************************************ 00:10:12.436 END TEST nvme_err_injection 00:10:12.436 ************************************ 00:10:12.436 00:10:12.436 real 0m0.326s 00:10:12.436 user 0m0.132s 00:10:12.436 sys 0m0.142s 00:10:12.436 08:30:47 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.436 08:30:47 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:12.436 08:30:47 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:12.436 08:30:47 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:10:12.436 08:30:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.436 08:30:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.436 ************************************ 00:10:12.436 START TEST nvme_overhead 00:10:12.436 ************************************ 00:10:12.436 08:30:47 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:13.816 Initializing NVMe Controllers 00:10:13.816 Attached to 0000:00:10.0 00:10:13.816 Attached to 0000:00:11.0 00:10:13.816 Attached to 0000:00:13.0 00:10:13.816 Attached to 0000:00:12.0 00:10:13.816 Initialization complete. Launching workers. 
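The nvme_err_injection pass above works by arming software error injection so that a Get Features command first "fails as expected" and then, once the injected error is consumed, "succeeds as expected". A sketch using spdk_nvme_qpair_add_cmd_error_injection(); the particular choices here (NULL for the admin qpair, a one-shot count, Invalid Field status) are my reading of the API, not lifted from the test source:

    #include "spdk/nvme.h"

    /* Arm a one-shot failure for the next Get Features on the admin
     * queue; with do_not_submit set, the command is completed in
     * software with the injected status instead of reaching the device. */
    static int
    arm_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
    {
        return spdk_nvme_qpair_add_cmd_error_injection(ctrlr,
                NULL,                        /* NULL selects the admin qpair */
                SPDK_NVME_OPC_GET_FEATURES,
                true,                        /* do_not_submit */
                0,                           /* timeout_in_us */
                1,                           /* err_count: inject once */
                SPDK_NVME_SCT_GENERIC,
                SPDK_NVME_SC_INVALID_FIELD);
    }

After the injected completion is observed, spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL, SPDK_NVME_OPC_GET_FEATURES) disarms it, and the retried command should succeed as it does in the log.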
00:10:13.816 submit (in ns) avg, min, max = 14233.5, 11866.7, 109901.2 00:10:13.816 complete (in ns) avg, min, max = 8486.4, 7370.3, 84609.6 00:10:13.816 00:10:13.816 Submit histogram 00:10:13.816 ================ 00:10:13.816 Range in us Cumulative Count 00:10:13.816 11.823 - 11.875: 0.0178% ( 1) 00:10:13.816 11.978 - 12.029: 0.0356% ( 1) 00:10:13.816 12.029 - 12.080: 0.0534% ( 1) 00:10:13.816 12.080 - 12.132: 0.1425% ( 5) 00:10:13.816 12.132 - 12.183: 0.2137% ( 4) 00:10:13.816 12.183 - 12.235: 0.2849% ( 4) 00:10:13.816 12.235 - 12.286: 0.3739% ( 5) 00:10:13.816 12.286 - 12.337: 0.4630% ( 5) 00:10:13.816 12.337 - 12.389: 0.5520% ( 5) 00:10:13.816 12.389 - 12.440: 0.6944% ( 8) 00:10:13.816 12.440 - 12.492: 0.8725% ( 10) 00:10:13.816 12.492 - 12.543: 1.0506% ( 10) 00:10:13.816 12.543 - 12.594: 1.3355% ( 16) 00:10:13.816 12.594 - 12.646: 1.7094% ( 21) 00:10:13.816 12.646 - 12.697: 2.0833% ( 21) 00:10:13.816 12.697 - 12.749: 2.4751% ( 22) 00:10:13.816 12.749 - 12.800: 2.9558% ( 27) 00:10:13.816 12.800 - 12.851: 3.5791% ( 35) 00:10:13.816 12.851 - 12.903: 4.4338% ( 48) 00:10:13.816 12.903 - 12.954: 5.1104% ( 38) 00:10:13.816 12.954 - 13.006: 5.8226% ( 40) 00:10:13.816 13.006 - 13.057: 6.6417% ( 46) 00:10:13.816 13.057 - 13.108: 7.2650% ( 35) 00:10:13.816 13.108 - 13.160: 8.1731% ( 51) 00:10:13.816 13.160 - 13.263: 10.2030% ( 114) 00:10:13.816 13.263 - 13.365: 13.4793% ( 184) 00:10:13.816 13.365 - 13.468: 19.2486% ( 324) 00:10:13.816 13.468 - 13.571: 28.0271% ( 493) 00:10:13.816 13.571 - 13.674: 38.8177% ( 606) 00:10:13.816 13.674 - 13.777: 49.7507% ( 614) 00:10:13.816 13.777 - 13.880: 60.4345% ( 600) 00:10:13.816 13.880 - 13.982: 69.7115% ( 521) 00:10:13.816 13.982 - 14.085: 77.5819% ( 442) 00:10:13.816 14.085 - 14.188: 83.2621% ( 319) 00:10:13.816 14.188 - 14.291: 87.5000% ( 238) 00:10:13.816 14.291 - 14.394: 90.1175% ( 147) 00:10:13.816 14.394 - 14.496: 91.7913% ( 94) 00:10:13.816 14.496 - 14.599: 92.4858% ( 39) 00:10:13.816 14.599 - 14.702: 92.8419% ( 20) 00:10:13.816 14.702 - 14.805: 93.0734% ( 13) 00:10:13.816 14.805 - 14.908: 93.2514% ( 10) 00:10:13.816 14.908 - 15.010: 93.2870% ( 2) 00:10:13.816 15.010 - 15.113: 93.3405% ( 3) 00:10:13.816 15.113 - 15.216: 93.3583% ( 1) 00:10:13.816 15.319 - 15.422: 93.3761% ( 1) 00:10:13.816 15.422 - 15.524: 93.4117% ( 2) 00:10:13.816 15.730 - 15.833: 93.4473% ( 2) 00:10:13.816 15.833 - 15.936: 93.4829% ( 2) 00:10:13.816 15.936 - 16.039: 93.5007% ( 1) 00:10:13.816 16.039 - 16.141: 93.5719% ( 4) 00:10:13.816 16.347 - 16.450: 93.6075% ( 2) 00:10:13.816 16.450 - 16.553: 93.6254% ( 1) 00:10:13.816 16.655 - 16.758: 93.6432% ( 1) 00:10:13.816 16.758 - 16.861: 93.6610% ( 1) 00:10:13.816 16.861 - 16.964: 93.6966% ( 2) 00:10:13.816 17.067 - 17.169: 93.7144% ( 1) 00:10:13.816 17.478 - 17.581: 93.7500% ( 2) 00:10:13.816 17.581 - 17.684: 93.7856% ( 2) 00:10:13.816 17.684 - 17.786: 93.8568% ( 4) 00:10:13.816 17.786 - 17.889: 93.9815% ( 7) 00:10:13.816 17.889 - 17.992: 94.1061% ( 7) 00:10:13.816 17.992 - 18.095: 94.3376% ( 13) 00:10:13.816 18.095 - 18.198: 94.4801% ( 8) 00:10:13.816 18.198 - 18.300: 94.7293% ( 14) 00:10:13.816 18.300 - 18.403: 94.9252% ( 11) 00:10:13.816 18.403 - 18.506: 95.2279% ( 17) 00:10:13.816 18.506 - 18.609: 95.4594% ( 13) 00:10:13.816 18.609 - 18.712: 95.6909% ( 13) 00:10:13.816 18.712 - 18.814: 95.9224% ( 13) 00:10:13.816 18.814 - 18.917: 96.1360% ( 12) 00:10:13.816 18.917 - 19.020: 96.2785% ( 8) 00:10:13.816 19.020 - 19.123: 96.4387% ( 9) 00:10:13.816 19.123 - 19.226: 96.5812% ( 8) 00:10:13.816 19.226 - 19.329: 96.8305% ( 14) 00:10:13.816 
19.329 - 19.431: 96.9195% ( 5) 00:10:13.816 19.431 - 19.534: 97.0976% ( 10) 00:10:13.816 19.534 - 19.637: 97.1510% ( 3) 00:10:13.816 19.637 - 19.740: 97.2578% ( 6) 00:10:13.816 19.740 - 19.843: 97.3825% ( 7) 00:10:13.816 19.843 - 19.945: 97.4715% ( 5) 00:10:13.817 19.945 - 20.048: 97.6318% ( 9) 00:10:13.817 20.048 - 20.151: 97.7208% ( 5) 00:10:13.817 20.151 - 20.254: 97.7564% ( 2) 00:10:13.817 20.254 - 20.357: 97.8454% ( 5) 00:10:13.817 20.459 - 20.562: 97.9167% ( 4) 00:10:13.817 20.562 - 20.665: 97.9523% ( 2) 00:10:13.817 20.665 - 20.768: 97.9701% ( 1) 00:10:13.817 20.768 - 20.871: 98.0413% ( 4) 00:10:13.817 20.871 - 20.973: 98.1481% ( 6) 00:10:13.817 20.973 - 21.076: 98.2550% ( 6) 00:10:13.817 21.076 - 21.179: 98.3262% ( 4) 00:10:13.817 21.179 - 21.282: 98.3618% ( 2) 00:10:13.817 21.282 - 21.385: 98.3796% ( 1) 00:10:13.817 21.385 - 21.488: 98.4330% ( 3) 00:10:13.817 21.488 - 21.590: 98.4509% ( 1) 00:10:13.817 21.590 - 21.693: 98.4865% ( 2) 00:10:13.817 21.796 - 21.899: 98.5221% ( 2) 00:10:13.817 21.899 - 22.002: 98.5399% ( 1) 00:10:13.817 22.002 - 22.104: 98.5755% ( 2) 00:10:13.817 22.207 - 22.310: 98.5933% ( 1) 00:10:13.817 22.413 - 22.516: 98.6289% ( 2) 00:10:13.817 22.516 - 22.618: 98.6467% ( 1) 00:10:13.817 23.030 - 23.133: 98.6645% ( 1) 00:10:13.817 23.133 - 23.235: 98.6823% ( 1) 00:10:13.817 23.235 - 23.338: 98.7179% ( 2) 00:10:13.817 23.338 - 23.441: 98.7536% ( 2) 00:10:13.817 23.441 - 23.544: 98.7892% ( 2) 00:10:13.817 23.544 - 23.647: 98.8070% ( 1) 00:10:13.817 23.647 - 23.749: 98.8248% ( 1) 00:10:13.817 23.749 - 23.852: 98.8604% ( 2) 00:10:13.817 23.852 - 23.955: 98.9138% ( 3) 00:10:13.817 23.955 - 24.058: 98.9672% ( 3) 00:10:13.817 24.058 - 24.161: 99.0207% ( 3) 00:10:13.817 24.161 - 24.263: 99.1631% ( 8) 00:10:13.817 24.263 - 24.366: 99.1809% ( 1) 00:10:13.817 24.366 - 24.469: 99.2165% ( 2) 00:10:13.817 24.469 - 24.572: 99.3234% ( 6) 00:10:13.817 24.572 - 24.675: 99.3768% ( 3) 00:10:13.817 24.675 - 24.778: 99.4124% ( 2) 00:10:13.817 24.880 - 24.983: 99.4302% ( 1) 00:10:13.817 25.600 - 25.703: 99.4480% ( 1) 00:10:13.817 26.011 - 26.114: 99.4658% ( 1) 00:10:13.817 28.376 - 28.582: 99.4836% ( 1) 00:10:13.817 28.582 - 28.787: 99.5014% ( 1) 00:10:13.817 28.787 - 28.993: 99.5370% ( 2) 00:10:13.817 29.198 - 29.404: 99.5726% ( 2) 00:10:13.817 29.404 - 29.610: 99.6261% ( 3) 00:10:13.817 29.610 - 29.815: 99.6617% ( 2) 00:10:13.817 29.815 - 30.021: 99.7151% ( 3) 00:10:13.817 30.021 - 30.227: 99.7329% ( 1) 00:10:13.817 30.227 - 30.432: 99.7507% ( 1) 00:10:13.817 30.843 - 31.049: 99.7685% ( 1) 00:10:13.817 31.460 - 31.666: 99.7863% ( 1) 00:10:13.817 32.077 - 32.283: 99.8041% ( 1) 00:10:13.817 33.311 - 33.516: 99.8219% ( 1) 00:10:13.817 34.956 - 35.161: 99.8397% ( 1) 00:10:13.817 39.685 - 39.891: 99.8575% ( 1) 00:10:13.817 42.769 - 42.975: 99.8754% ( 1) 00:10:13.817 44.620 - 44.826: 99.8932% ( 1) 00:10:13.817 46.265 - 46.471: 99.9110% ( 1) 00:10:13.817 62.920 - 63.332: 99.9288% ( 1) 00:10:13.817 66.210 - 66.622: 99.9466% ( 1) 00:10:13.817 70.734 - 71.145: 99.9644% ( 1) 00:10:13.817 104.867 - 105.279: 99.9822% ( 1) 00:10:13.817 109.391 - 110.214: 100.0000% ( 1) 00:10:13.817 00:10:13.817 Complete histogram 00:10:13.817 ================== 00:10:13.817 Range in us Cumulative Count 00:10:13.817 7.351 - 7.402: 0.0534% ( 3) 00:10:13.817 7.454 - 7.505: 0.1425% ( 5) 00:10:13.817 7.505 - 7.557: 0.2849% ( 8) 00:10:13.817 7.557 - 7.608: 0.3739% ( 5) 00:10:13.817 7.608 - 7.659: 0.4452% ( 4) 00:10:13.817 7.659 - 7.711: 0.5876% ( 8) 00:10:13.817 7.711 - 7.762: 1.1396% ( 31) 00:10:13.817 7.762 - 7.814: 
1.7628% ( 35) 00:10:13.817 7.814 - 7.865: 4.5228% ( 155) 00:10:13.817 7.865 - 7.916: 8.8675% ( 244) 00:10:13.817 7.916 - 7.968: 16.8981% ( 451) 00:10:13.817 7.968 - 8.019: 25.8725% ( 504) 00:10:13.817 8.019 - 8.071: 33.9209% ( 452) 00:10:13.817 8.071 - 8.122: 43.4295% ( 534) 00:10:13.817 8.122 - 8.173: 57.1759% ( 772) 00:10:13.817 8.173 - 8.225: 67.3077% ( 569) 00:10:13.817 8.225 - 8.276: 73.6467% ( 356) 00:10:13.817 8.276 - 8.328: 77.4751% ( 215) 00:10:13.817 8.328 - 8.379: 80.7870% ( 186) 00:10:13.817 8.379 - 8.431: 83.3155% ( 142) 00:10:13.817 8.431 - 8.482: 85.2564% ( 109) 00:10:13.817 8.482 - 8.533: 86.5385% ( 72) 00:10:13.817 8.533 - 8.585: 87.9095% ( 77) 00:10:13.817 8.585 - 8.636: 89.1916% ( 72) 00:10:13.817 8.636 - 8.688: 90.1531% ( 54) 00:10:13.817 8.688 - 8.739: 91.0256% ( 49) 00:10:13.817 8.739 - 8.790: 91.8803% ( 48) 00:10:13.817 8.790 - 8.842: 92.5926% ( 40) 00:10:13.817 8.842 - 8.893: 93.3048% ( 40) 00:10:13.817 8.893 - 8.945: 93.9103% ( 34) 00:10:13.817 8.945 - 8.996: 94.4801% ( 32) 00:10:13.817 8.996 - 9.047: 94.8184% ( 19) 00:10:13.817 9.047 - 9.099: 95.3348% ( 29) 00:10:13.817 9.099 - 9.150: 95.5840% ( 14) 00:10:13.817 9.150 - 9.202: 95.9046% ( 18) 00:10:13.817 9.202 - 9.253: 96.1004% ( 11) 00:10:13.817 9.253 - 9.304: 96.2785% ( 10) 00:10:13.817 9.304 - 9.356: 96.3497% ( 4) 00:10:13.817 9.356 - 9.407: 96.4922% ( 8) 00:10:13.817 9.407 - 9.459: 96.5990% ( 6) 00:10:13.817 9.459 - 9.510: 96.6702% ( 4) 00:10:13.817 9.510 - 9.561: 96.6880% ( 1) 00:10:13.817 9.561 - 9.613: 96.7771% ( 5) 00:10:13.817 9.664 - 9.716: 96.8305% ( 3) 00:10:13.817 9.716 - 9.767: 96.8483% ( 1) 00:10:13.817 9.767 - 9.818: 96.8661% ( 1) 00:10:13.817 9.921 - 9.973: 96.8839% ( 1) 00:10:13.817 9.973 - 10.024: 96.9017% ( 1) 00:10:13.817 10.178 - 10.230: 96.9195% ( 1) 00:10:13.817 10.949 - 11.001: 96.9373% ( 1) 00:10:13.817 11.206 - 11.258: 96.9551% ( 1) 00:10:13.817 11.463 - 11.515: 96.9729% ( 1) 00:10:13.817 12.132 - 12.183: 96.9907% ( 1) 00:10:13.817 12.337 - 12.389: 97.0085% ( 1) 00:10:13.817 12.440 - 12.492: 97.0264% ( 1) 00:10:13.817 12.492 - 12.543: 97.0442% ( 1) 00:10:13.817 13.006 - 13.057: 97.0620% ( 1) 00:10:13.817 13.057 - 13.108: 97.0798% ( 1) 00:10:13.817 13.108 - 13.160: 97.0976% ( 1) 00:10:13.817 13.160 - 13.263: 97.1154% ( 1) 00:10:13.817 13.263 - 13.365: 97.1866% ( 4) 00:10:13.817 13.365 - 13.468: 97.2756% ( 5) 00:10:13.817 13.468 - 13.571: 97.3825% ( 6) 00:10:13.817 13.571 - 13.674: 97.4715% ( 5) 00:10:13.817 13.674 - 13.777: 97.5605% ( 5) 00:10:13.817 13.777 - 13.880: 97.5783% ( 1) 00:10:13.817 13.880 - 13.982: 97.7742% ( 11) 00:10:13.817 13.982 - 14.085: 97.8811% ( 6) 00:10:13.817 14.085 - 14.188: 97.9879% ( 6) 00:10:13.817 14.188 - 14.291: 98.0591% ( 4) 00:10:13.817 14.291 - 14.394: 98.1481% ( 5) 00:10:13.817 14.394 - 14.496: 98.2372% ( 5) 00:10:13.817 14.496 - 14.599: 98.2550% ( 1) 00:10:13.817 14.599 - 14.702: 98.2728% ( 1) 00:10:13.817 14.702 - 14.805: 98.3440% ( 4) 00:10:13.817 14.805 - 14.908: 98.3974% ( 3) 00:10:13.817 14.908 - 15.010: 98.4330% ( 2) 00:10:13.817 15.010 - 15.113: 98.4687% ( 2) 00:10:13.817 15.216 - 15.319: 98.4865% ( 1) 00:10:13.817 15.319 - 15.422: 98.5043% ( 1) 00:10:13.817 15.422 - 15.524: 98.5399% ( 2) 00:10:13.817 15.524 - 15.627: 98.5755% ( 2) 00:10:13.817 15.627 - 15.730: 98.5933% ( 1) 00:10:13.817 15.936 - 16.039: 98.6111% ( 1) 00:10:13.817 16.039 - 16.141: 98.6467% ( 2) 00:10:13.817 16.141 - 16.244: 98.6645% ( 1) 00:10:13.817 17.581 - 17.684: 98.6823% ( 1) 00:10:13.817 17.889 - 17.992: 98.7001% ( 1) 00:10:13.817 17.992 - 18.095: 98.7358% ( 2) 00:10:13.817 
18.095 - 18.198: 98.7714% ( 2) 00:10:13.817 18.198 - 18.300: 98.8248% ( 3) 00:10:13.817 18.300 - 18.403: 98.9850% ( 9) 00:10:13.817 18.403 - 18.506: 99.2521% ( 15) 00:10:13.817 18.506 - 18.609: 99.4124% ( 9) 00:10:13.817 18.609 - 18.712: 99.4658% ( 3) 00:10:13.817 18.712 - 18.814: 99.5370% ( 4) 00:10:13.817 18.814 - 18.917: 99.5548% ( 1) 00:10:13.817 18.917 - 19.020: 99.5905% ( 2) 00:10:13.817 19.020 - 19.123: 99.6083% ( 1) 00:10:13.817 19.123 - 19.226: 99.6261% ( 1) 00:10:13.817 19.226 - 19.329: 99.6439% ( 1) 00:10:13.817 19.534 - 19.637: 99.6795% ( 2) 00:10:13.817 20.048 - 20.151: 99.6973% ( 1) 00:10:13.817 20.562 - 20.665: 99.7151% ( 1) 00:10:13.817 23.852 - 23.955: 99.7507% ( 2) 00:10:13.817 23.955 - 24.058: 99.7863% ( 2) 00:10:13.817 24.058 - 24.161: 99.8219% ( 2) 00:10:13.817 24.263 - 24.366: 99.8397% ( 1) 00:10:13.817 24.469 - 24.572: 99.8575% ( 1) 00:10:13.817 25.086 - 25.189: 99.8754% ( 1) 00:10:13.817 35.984 - 36.190: 99.8932% ( 1) 00:10:13.817 40.096 - 40.302: 99.9110% ( 1) 00:10:13.817 41.124 - 41.330: 99.9288% ( 1) 00:10:13.817 47.910 - 48.116: 99.9466% ( 1) 00:10:13.817 52.434 - 52.639: 99.9644% ( 1) 00:10:13.817 76.903 - 77.314: 99.9822% ( 1) 00:10:13.817 84.305 - 84.716: 100.0000% ( 1) 00:10:13.817 00:10:13.817 ************************************ 00:10:13.817 END TEST nvme_overhead 00:10:13.817 ************************************ 00:10:13.817 00:10:13.817 real 0m1.308s 00:10:13.817 user 0m1.096s 00:10:13.817 sys 0m0.162s 00:10:13.818 08:30:48 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.818 08:30:48 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 08:30:48 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:13.818 08:30:48 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:13.818 08:30:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.818 08:30:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 ************************************ 00:10:13.818 START TEST nvme_arbitration 00:10:13.818 ************************************ 00:10:13.818 08:30:48 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:18.011 Initializing NVMe Controllers 00:10:18.011 Attached to 0000:00:10.0 00:10:18.011 Attached to 0000:00:11.0 00:10:18.011 Attached to 0000:00:13.0 00:10:18.011 Attached to 0000:00:12.0 00:10:18.011 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:18.011 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:18.011 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:18.011 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:18.011 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:18.011 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:18.012 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:18.012 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:18.012 Initialization complete. Launching workers. 
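The submit/complete averages in the nvme_overhead output above come from timing each IO's submission call and its completion poll separately. A simplified sketch of that measurement with the SPDK tick counter; the real overhead tool separates driver overhead from device latency more carefully, and buf is assumed to be a DMA-safe sector-sized buffer:

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void
    io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    /* Time one single-sector read: submit cost is the time spent inside
     * the submission call, complete cost is the time spent polling for
     * its completion. */
    static void
    time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
    {
        uint64_t hz = spdk_get_ticks_hz();
        bool done = false;
        uint64_t t0, submit_ticks, complete_ticks;

        t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1, io_done, &done, 0);
        submit_ticks = spdk_get_ticks() - t0;

        t0 = spdk_get_ticks();
        while (!done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        complete_ticks = spdk_get_ticks() - t0;

        printf("submit: %ju ns  complete: %ju ns\n",
               (uintmax_t)(submit_ticks * 1000000000ULL / hz),
               (uintmax_t)(complete_ticks * 1000000000ULL / hz));
    }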
00:10:18.012 Starting thread on core 2 with urgent priority queue 00:10:18.012 Starting thread on core 1 with urgent priority queue 00:10:18.012 Starting thread on core 3 with urgent priority queue 00:10:18.012 Starting thread on core 0 with urgent priority queue 00:10:18.012 QEMU NVMe Ctrl (12340 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:10:18.012 QEMU NVMe Ctrl (12342 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:10:18.012 QEMU NVMe Ctrl (12341 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:10:18.012 QEMU NVMe Ctrl (12342 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:10:18.012 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios 00:10:18.012 QEMU NVMe Ctrl (12342 ) core 3: 384.00 IO/s 260.42 secs/100000 ios 00:10:18.012 ======================================================== 00:10:18.012 00:10:18.012 00:10:18.012 real 0m3.464s 00:10:18.012 user 0m9.469s 00:10:18.012 sys 0m0.173s 00:10:18.012 08:30:52 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.012 08:30:52 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:18.012 ************************************ 00:10:18.012 END TEST nvme_arbitration 00:10:18.012 ************************************ 00:10:18.012 08:30:52 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:18.012 08:30:52 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.012 08:30:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.012 08:30:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.012 ************************************ 00:10:18.012 START TEST nvme_single_aen 00:10:18.012 ************************************ 00:10:18.012 08:30:52 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:18.012 Asynchronous Event Request test 00:10:18.012 Attached to 0000:00:10.0 00:10:18.012 Attached to 0000:00:11.0 00:10:18.012 Attached to 0000:00:13.0 00:10:18.012 Attached to 0000:00:12.0 00:10:18.012 Reset controller to setup AER completions for this process 00:10:18.012 Registering asynchronous event callbacks... 
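The nvme_arbitration run above depends on two things: the controller being brought up with weighted round robin (WRR) arbitration, and each IO queue pair being created with an explicit priority class, which is what the "with urgent priority queue" lines reflect. A sketch of both steps; requesting WRR at probe time is only honored if the controller advertises support for it:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Ask for weighted round robin arbitration before the controller
     * is enabled. */
    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
        return true;
    }

    /* Create an IO queue pair in the urgent priority class. */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts qopts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
        qopts.qprio = SPDK_NVME_QPRIO_URGENT;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
    }

The per-core IO/s spread in the results above reflects how the arbiter shares each controller's command slots across the competing queues.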
00:10:18.012 Getting orig temperature thresholds of all controllers 00:10:18.012 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.012 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.012 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.012 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.012 Setting all controllers temperature threshold low to trigger AER 00:10:18.012 Waiting for all controllers temperature threshold to be set lower 00:10:18.012 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.012 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:18.012 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.012 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:18.012 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.012 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:18.012 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.012 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:18.012 Waiting for all controllers to trigger AER and reset threshold 00:10:18.012 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.012 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.012 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.012 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.012 Cleaning up... 00:10:18.012 00:10:18.012 real 0m0.303s 00:10:18.012 user 0m0.105s 00:10:18.012 sys 0m0.155s 00:10:18.012 08:30:52 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.012 ************************************ 00:10:18.012 END TEST nvme_single_aen 00:10:18.012 ************************************ 00:10:18.012 08:30:52 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:18.012 08:30:52 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:18.012 08:30:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.012 08:30:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.012 08:30:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.012 ************************************ 00:10:18.012 START TEST nvme_doorbell_aers 00:10:18.012 ************************************ 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
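The nvme_single_aen flow above registers an AER callback and then drops the temperature threshold feature below the current temperature (323 Kelvin here) so that every controller immediately posts an asynchronous event. A sketch of that trigger, assuming the threshold is carried in the low 16 bits of CDW11 in Kelvin:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Fires when the controller posts an asynchronous event; the test
     * then reads log page 2 (SMART / health) to clear it, which is the
     * "aer_cb for log page 2" line in the output. */
    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        printf("AER completed, cdw0=0x%x\n", cpl->cdw0);
    }

    static void
    set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    static void
    trigger_temp_aer(struct spdk_nvme_ctrlr *ctrlr, uint16_t kelvin)
    {
        bool done = false;

        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                kelvin /* cdw11: threshold in Kelvin */, 0 /* cdw12 */,
                NULL, 0, set_feature_done, &done);
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }

The multi-controller variant later in the run (nvme_multi_aen) drives the same trigger from a parent and a child process sharing the controllers.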
00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:18.012 08:30:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:18.271 [2024-11-22 08:30:53.117062] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:28.251 Executing: test_write_invalid_db 00:10:28.251 Waiting for AER completion... 00:10:28.251 Failure: test_write_invalid_db 00:10:28.251 00:10:28.251 Executing: test_invalid_db_write_overflow_sq 00:10:28.251 Waiting for AER completion... 00:10:28.251 Failure: test_invalid_db_write_overflow_sq 00:10:28.251 00:10:28.251 Executing: test_invalid_db_write_overflow_cq 00:10:28.251 Waiting for AER completion... 00:10:28.251 Failure: test_invalid_db_write_overflow_cq 00:10:28.251 00:10:28.251 08:31:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:28.251 08:31:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:28.251 [2024-11-22 08:31:03.163557] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:38.231 Executing: test_write_invalid_db 00:10:38.231 Waiting for AER completion... 00:10:38.231 Failure: test_write_invalid_db 00:10:38.231 00:10:38.231 Executing: test_invalid_db_write_overflow_sq 00:10:38.231 Waiting for AER completion... 00:10:38.231 Failure: test_invalid_db_write_overflow_sq 00:10:38.231 00:10:38.231 Executing: test_invalid_db_write_overflow_cq 00:10:38.231 Waiting for AER completion... 00:10:38.231 Failure: test_invalid_db_write_overflow_cq 00:10:38.231 00:10:38.231 08:31:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:38.231 08:31:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:38.231 [2024-11-22 08:31:13.234565] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:48.209 Executing: test_write_invalid_db 00:10:48.210 Waiting for AER completion... 00:10:48.210 Failure: test_write_invalid_db 00:10:48.210 00:10:48.210 Executing: test_invalid_db_write_overflow_sq 00:10:48.210 Waiting for AER completion... 00:10:48.210 Failure: test_invalid_db_write_overflow_sq 00:10:48.210 00:10:48.210 Executing: test_invalid_db_write_overflow_cq 00:10:48.210 Waiting for AER completion... 
00:10:48.210 Failure: test_invalid_db_write_overflow_cq 00:10:48.210 00:10:48.210 08:31:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:48.210 08:31:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:48.210 [2024-11-22 08:31:23.289342] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.191 Executing: test_write_invalid_db 00:10:58.191 Waiting for AER completion... 00:10:58.191 Failure: test_write_invalid_db 00:10:58.191 00:10:58.191 Executing: test_invalid_db_write_overflow_sq 00:10:58.191 Waiting for AER completion... 00:10:58.191 Failure: test_invalid_db_write_overflow_sq 00:10:58.191 00:10:58.191 Executing: test_invalid_db_write_overflow_cq 00:10:58.191 Waiting for AER completion... 00:10:58.191 Failure: test_invalid_db_write_overflow_cq 00:10:58.191 00:10:58.191 ************************************ 00:10:58.191 END TEST nvme_doorbell_aers 00:10:58.191 ************************************ 00:10:58.191 00:10:58.191 real 0m40.341s 00:10:58.191 user 0m28.446s 00:10:58.191 sys 0m11.514s 00:10:58.191 08:31:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.191 08:31:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:58.191 08:31:33 nvme -- nvme/nvme.sh@97 -- # uname 00:10:58.191 08:31:33 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:58.191 08:31:33 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:58.191 08:31:33 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:58.191 08:31:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.191 08:31:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:58.191 ************************************ 00:10:58.191 START TEST nvme_multi_aen 00:10:58.191 ************************************ 00:10:58.191 08:31:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:58.451 [2024-11-22 08:31:33.368363] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.368447] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.368464] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.370669] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.370854] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.370874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.372489] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. 
Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.372531] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.372549] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.374260] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.374297] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 [2024-11-22 08:31:33.374311] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64470) is not found. Dropping the request. 00:10:58.451 Child process pid: 64985 00:10:58.710 [Child] Asynchronous Event Request test 00:10:58.710 [Child] Attached to 0000:00:10.0 00:10:58.710 [Child] Attached to 0000:00:11.0 00:10:58.710 [Child] Attached to 0000:00:13.0 00:10:58.710 [Child] Attached to 0000:00:12.0 00:10:58.710 [Child] Registering asynchronous event callbacks... 00:10:58.710 [Child] Getting orig temperature thresholds of all controllers 00:10:58.710 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:58.710 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:58.710 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:58.710 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:58.711 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:58.711 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:58.711 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:58.711 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:58.711 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:58.711 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:58.711 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:58.711 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:58.711 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:58.711 [Child] Cleaning up... 00:10:58.711 Asynchronous Event Request test 00:10:58.711 Attached to 0000:00:10.0 00:10:58.711 Attached to 0000:00:11.0 00:10:58.711 Attached to 0000:00:13.0 00:10:58.711 Attached to 0000:00:12.0 00:10:58.711 Reset controller to setup AER completions for this process 00:10:58.711 Registering asynchronous event callbacks... 
00:10:58.711 Getting orig temperature thresholds of all controllers 00:10:58.711 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:58.711 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:58.711 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:58.711 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:58.711 Setting all controllers temperature threshold low to trigger AER 00:10:58.711 Waiting for all controllers temperature threshold to be set lower 00:10:58.711 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:58.711 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:58.711 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:58.711 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:58.711 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:58.711 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:58.711 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:58.711 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:58.711 Waiting for all controllers to trigger AER and reset threshold 00:10:58.711 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:58.711 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:58.711 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:58.711 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:58.711 Cleaning up... 00:10:58.711 00:10:58.711 real 0m0.635s 00:10:58.711 user 0m0.215s 00:10:58.711 sys 0m0.305s 00:10:58.711 08:31:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.711 08:31:33 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:58.711 ************************************ 00:10:58.711 END TEST nvme_multi_aen 00:10:58.711 ************************************ 00:10:58.970 08:31:33 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:58.970 08:31:33 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:58.970 08:31:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.970 08:31:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:58.970 ************************************ 00:10:58.970 START TEST nvme_startup 00:10:58.970 ************************************ 00:10:58.970 08:31:33 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:59.229 Initializing NVMe Controllers 00:10:59.229 Attached to 0000:00:10.0 00:10:59.229 Attached to 0000:00:11.0 00:10:59.229 Attached to 0000:00:13.0 00:10:59.229 Attached to 0000:00:12.0 00:10:59.229 Initialization complete. 00:10:59.229 Time used:246938.797 (us). 
00:10:59.229 ************************************ 00:10:59.229 END TEST nvme_startup 00:10:59.229 ************************************ 00:10:59.229 00:10:59.229 real 0m0.349s 00:10:59.229 user 0m0.111s 00:10:59.229 sys 0m0.186s 00:10:59.229 08:31:34 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.229 08:31:34 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:59.229 08:31:34 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:59.229 08:31:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.229 08:31:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.229 08:31:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:59.229 ************************************ 00:10:59.229 START TEST nvme_multi_secondary 00:10:59.229 ************************************ 00:10:59.229 08:31:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:10:59.229 08:31:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65041 00:10:59.229 08:31:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:59.229 08:31:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65042 00:10:59.229 08:31:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:59.229 08:31:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:03.424 Initializing NVMe Controllers 00:11:03.424 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:03.424 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:03.424 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:03.424 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:03.424 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:03.424 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:03.424 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:03.424 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:03.424 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:03.424 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:03.424 Initialization complete. Launching workers. 
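nvme_multi_secondary drives the controllers from three spdk_nvme_perf processes at once: all share DPDK shared-memory id 0 (-i 0), so one becomes the primary process and the other two attach as secondaries, each pinned to its own core (-c 0x1, 0x2, 0x4). A hedged sketch of the launch pattern behind pid0/pid1 above; the latency tables that follow are each process's own summary:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # 5 s run, core 0
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # 3 s run, core 1
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # 3 s run, core 2 (foreground)
    wait "$pid0"
    wait "$pid1"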
00:11:03.424 ======================================================== 00:11:03.424 Latency(us) 00:11:03.424 Device Information : IOPS MiB/s Average min max 00:11:03.424 PCIE (0000:00:10.0) NSID 1 from core 2: 2994.26 11.70 5340.69 1156.08 13403.42 00:11:03.424 PCIE (0000:00:11.0) NSID 1 from core 2: 2994.26 11.70 5342.94 1271.26 12888.39 00:11:03.424 PCIE (0000:00:13.0) NSID 1 from core 2: 2994.26 11.70 5342.90 1188.20 13000.50 00:11:03.424 PCIE (0000:00:12.0) NSID 1 from core 2: 2994.26 11.70 5342.84 1228.36 13514.03 00:11:03.424 PCIE (0000:00:12.0) NSID 2 from core 2: 2994.26 11.70 5342.34 1209.98 13173.87 00:11:03.424 PCIE (0000:00:12.0) NSID 3 from core 2: 2994.26 11.70 5342.98 1232.17 12580.57 00:11:03.424 ======================================================== 00:11:03.424 Total : 17965.56 70.18 5342.45 1156.08 13514.03 00:11:03.424 00:11:03.424 Initializing NVMe Controllers 00:11:03.424 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:03.424 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:03.424 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:03.424 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:03.424 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:03.424 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:03.424 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:03.424 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:03.424 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:03.424 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:03.424 Initialization complete. Launching workers. 00:11:03.424 ======================================================== 00:11:03.424 Latency(us) 00:11:03.424 Device Information : IOPS MiB/s Average min max 00:11:03.424 PCIE (0000:00:10.0) NSID 1 from core 1: 5021.58 19.62 3183.75 1341.28 10866.15 00:11:03.424 PCIE (0000:00:11.0) NSID 1 from core 1: 5021.58 19.62 3185.66 1262.19 11217.96 00:11:03.424 PCIE (0000:00:13.0) NSID 1 from core 1: 5021.58 19.62 3185.66 1346.27 11838.21 00:11:03.424 PCIE (0000:00:12.0) NSID 1 from core 1: 5021.58 19.62 3185.83 1372.33 11155.45 00:11:03.424 PCIE (0000:00:12.0) NSID 2 from core 1: 5021.58 19.62 3185.82 1336.38 10833.58 00:11:03.424 PCIE (0000:00:12.0) NSID 3 from core 1: 5021.58 19.62 3185.84 1441.31 10899.88 00:11:03.424 ======================================================== 00:11:03.424 Total : 30129.48 117.69 3185.43 1262.19 11838.21 00:11:03.424 00:11:03.424 08:31:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65041 00:11:04.485 Initializing NVMe Controllers 00:11:04.485 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:04.485 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:04.485 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:04.485 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:04.485 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:04.485 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:04.485 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:04.485 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:04.485 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:04.485 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:04.485 Initialization complete. Launching workers. 
00:11:04.485 ======================================================== 00:11:04.485 Latency(us) 00:11:04.485 Device Information : IOPS MiB/s Average min max 00:11:04.485 PCIE (0000:00:10.0) NSID 1 from core 0: 7932.74 30.99 2015.52 1021.66 6081.67 00:11:04.485 PCIE (0000:00:11.0) NSID 1 from core 0: 7932.74 30.99 2016.50 946.58 6556.40 00:11:04.485 PCIE (0000:00:13.0) NSID 1 from core 0: 7932.74 30.99 2016.46 942.56 7212.71 00:11:04.485 PCIE (0000:00:12.0) NSID 1 from core 0: 7932.74 30.99 2016.42 890.36 7602.17 00:11:04.485 PCIE (0000:00:12.0) NSID 2 from core 0: 7932.74 30.99 2016.39 804.03 7945.89 00:11:04.485 PCIE (0000:00:12.0) NSID 3 from core 0: 7935.94 31.00 2015.54 751.61 6386.27 00:11:04.485 ======================================================== 00:11:04.485 Total : 47599.67 185.94 2016.14 751.61 7945.89 00:11:04.485 00:11:04.745 08:31:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65042 00:11:04.745 08:31:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65112 00:11:04.745 08:31:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:04.745 08:31:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65113 00:11:04.745 08:31:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:04.745 08:31:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:08.036 Initializing NVMe Controllers 00:11:08.036 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:08.036 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:08.036 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:08.036 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:08.036 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:08.036 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:08.036 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:08.036 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:08.036 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:08.036 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:08.036 Initialization complete. Launching workers. 
00:11:08.036 ======================================================== 00:11:08.037 Latency(us) 00:11:08.037 Device Information : IOPS MiB/s Average min max 00:11:08.037 PCIE (0000:00:10.0) NSID 1 from core 1: 5039.53 19.69 3172.58 920.11 13045.25 00:11:08.037 PCIE (0000:00:11.0) NSID 1 from core 1: 5039.53 19.69 3174.44 945.84 12947.47 00:11:08.037 PCIE (0000:00:13.0) NSID 1 from core 1: 5039.53 19.69 3174.64 962.51 12899.77 00:11:08.037 PCIE (0000:00:12.0) NSID 1 from core 1: 5039.53 19.69 3175.20 941.18 12588.66 00:11:08.037 PCIE (0000:00:12.0) NSID 2 from core 1: 5039.53 19.69 3175.33 946.51 13172.23 00:11:08.037 PCIE (0000:00:12.0) NSID 3 from core 1: 5044.86 19.71 3172.13 937.84 13041.64 00:11:08.037 ======================================================== 00:11:08.037 Total : 30242.49 118.13 3174.05 920.11 13172.23 00:11:08.037 00:11:08.037 Initializing NVMe Controllers 00:11:08.037 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:08.037 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:08.037 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:08.037 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:08.037 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:08.037 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:08.037 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:08.037 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:08.037 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:08.037 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:08.037 Initialization complete. Launching workers. 00:11:08.037 ======================================================== 00:11:08.037 Latency(us) 00:11:08.037 Device Information : IOPS MiB/s Average min max 00:11:08.037 PCIE (0000:00:10.0) NSID 1 from core 0: 5064.92 19.78 3156.63 994.18 6390.84 00:11:08.037 PCIE (0000:00:11.0) NSID 1 from core 0: 5064.92 19.78 3158.24 1020.68 6427.47 00:11:08.037 PCIE (0000:00:13.0) NSID 1 from core 0: 5064.92 19.78 3158.11 1032.15 6112.36 00:11:08.037 PCIE (0000:00:12.0) NSID 1 from core 0: 5064.92 19.78 3158.11 1042.00 6764.23 00:11:08.037 PCIE (0000:00:12.0) NSID 2 from core 0: 5064.92 19.78 3158.06 1035.61 7135.45 00:11:08.037 PCIE (0000:00:12.0) NSID 3 from core 0: 5064.92 19.78 3157.92 1027.34 6538.08 00:11:08.037 ======================================================== 00:11:08.037 Total : 30389.54 118.71 3157.85 994.18 7135.45 00:11:08.037 00:11:10.572 Initializing NVMe Controllers 00:11:10.572 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:10.572 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:10.572 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:10.572 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:10.572 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:10.572 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:10.572 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:10.572 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:10.572 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:10.572 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:10.572 Initialization complete. Launching workers. 
00:11:10.572 ======================================================== 00:11:10.572 Latency(us) 00:11:10.572 Device Information : IOPS MiB/s Average min max 00:11:10.572 PCIE (0000:00:10.0) NSID 1 from core 2: 3397.22 13.27 4708.44 931.99 11703.77 00:11:10.572 PCIE (0000:00:11.0) NSID 1 from core 2: 3397.22 13.27 4709.28 955.71 11841.01 00:11:10.572 PCIE (0000:00:13.0) NSID 1 from core 2: 3397.22 13.27 4709.43 964.63 11017.64 00:11:10.572 PCIE (0000:00:12.0) NSID 1 from core 2: 3397.22 13.27 4709.36 985.91 10835.30 00:11:10.572 PCIE (0000:00:12.0) NSID 2 from core 2: 3397.22 13.27 4709.29 973.80 11586.46 00:11:10.572 PCIE (0000:00:12.0) NSID 3 from core 2: 3397.22 13.27 4709.24 957.06 11739.11 00:11:10.572 ======================================================== 00:11:10.572 Total : 20383.31 79.62 4709.17 931.99 11841.01 00:11:10.572 00:11:10.572 ************************************ 00:11:10.572 END TEST nvme_multi_secondary 00:11:10.572 ************************************ 00:11:10.572 08:31:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65112 00:11:10.572 08:31:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65113 00:11:10.572 00:11:10.572 real 0m11.006s 00:11:10.572 user 0m18.565s 00:11:10.572 sys 0m1.056s 00:11:10.572 08:31:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.572 08:31:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:10.572 08:31:45 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:10.572 08:31:45 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:10.572 08:31:45 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64045 ]] 00:11:10.572 08:31:45 nvme -- common/autotest_common.sh@1094 -- # kill 64045 00:11:10.572 08:31:45 nvme -- common/autotest_common.sh@1095 -- # wait 64045 00:11:10.572 [2024-11-22 08:31:45.313203] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.313342] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.313423] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.313478] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.319844] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.319924] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.319972] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.320008] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.324896] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 
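The burst of "owning process (pid 64984) is not found. Dropping the request." messages here (and continuing below) accompanies kill_stub: the long-lived stub process holding the controllers is killed, and admin requests still pending for an already-exited test process are discarded rather than completed. A rough sketch of the teardown, with the pid and stub file taken from the xtrace:

    # Sketch of the kill_stub pattern as reconstructed from the trace above;
    # the real helper lives in common/autotest_common.sh.
    stub_pid=64045
    if [[ -e /proc/$stub_pid ]]; then
        kill "$stub_pid"
        wait "$stub_pid" 2>/dev/null || true
    fi
    rm -f /var/run/spdk_stub0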
00:11:10.572 [2024-11-22 08:31:45.324990] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.325023] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.325058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.329588] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.329646] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.329668] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 [2024-11-22 08:31:45.329691] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:11:10.572 08:31:45 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:11:10.572 08:31:45 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:11:10.572 08:31:45 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:10.572 08:31:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.572 08:31:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.572 08:31:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:10.572 ************************************ 00:11:10.572 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:10.572 ************************************ 00:11:10.572 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:10.572 * Looking for test storage... 
00:11:10.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.833 --rc genhtml_branch_coverage=1 00:11:10.833 --rc genhtml_function_coverage=1 00:11:10.833 --rc genhtml_legend=1 00:11:10.833 --rc geninfo_all_blocks=1 00:11:10.833 --rc geninfo_unexecuted_blocks=1 00:11:10.833 00:11:10.833 ' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.833 --rc genhtml_branch_coverage=1 00:11:10.833 --rc genhtml_function_coverage=1 00:11:10.833 --rc genhtml_legend=1 00:11:10.833 --rc geninfo_all_blocks=1 00:11:10.833 --rc geninfo_unexecuted_blocks=1 00:11:10.833 00:11:10.833 ' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.833 --rc genhtml_branch_coverage=1 00:11:10.833 --rc genhtml_function_coverage=1 00:11:10.833 --rc genhtml_legend=1 00:11:10.833 --rc geninfo_all_blocks=1 00:11:10.833 --rc geninfo_unexecuted_blocks=1 00:11:10.833 00:11:10.833 ' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.833 --rc genhtml_branch_coverage=1 00:11:10.833 --rc genhtml_function_coverage=1 00:11:10.833 --rc genhtml_legend=1 00:11:10.833 --rc geninfo_all_blocks=1 00:11:10.833 --rc geninfo_unexecuted_blocks=1 00:11:10.833 00:11:10.833 ' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:10.833 
08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65279 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65279 00:11:10.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65279 ']' 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
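get_first_nvme_bdf, traced above, simply reuses get_nvme_bdfs and keeps the first address, which becomes the reset target for this test. Sketched, under the same assumptions as before:

    get_first_nvme_bdf() {
        local bdfs=($(get_nvme_bdfs))
        echo "${bdfs[0]}"
    }
    bdf=$(get_first_nvme_bdf)
    [[ -z $bdf ]] && exit 1   # mirrors the '[ -z 0000:00:10.0 ]' guard above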
00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.833 08:31:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:11.093 [2024-11-22 08:31:45.978457] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:11:11.093 [2024-11-22 08:31:45.978569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65279 ] 00:11:11.352 [2024-11-22 08:31:46.177729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.352 [2024-11-22 08:31:46.319773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.352 [2024-11-22 08:31:46.319952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.352 [2024-11-22 08:31:46.320149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.352 [2024-11-22 08:31:46.320193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.288 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.288 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:11:12.288 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:12.288 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.288 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:12.288 nvme0n1 00:11:12.288 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.288 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_eSGGK.txt 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:12.548 true 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732264307 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65308 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:12.548 08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:12.548 
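At this point the stuck-command scenario is fully armed: a one-shot error injection on nvme0's admin queue will hold any Get Features command (opc 10, i.e. 0x0a, the first byte of the base64 blob) for up to 15 s (--do_not_submit) before failing it with sct 0/sc 1, and a matching Get Features (cdw10=7, Number of Queues, as the completion below confirms) has just been sent in the background. The controller reset issued after the sleep must complete the held command well before the injection timeout. Condensed, using only the RPCs shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" &   # blocks on the injection
    sleep 2
    "$rpc" bdev_nvme_reset_controller nvme0   # reset manually completes the held command
    wait $!                                   # diff_time ends up ~2 s, not 15 s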
08:31:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:14.454 [2024-11-22 08:31:49.403700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:14.454 [2024-11-22 08:31:49.404202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:14.454 [2024-11-22 08:31:49.404747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:14.454 [2024-11-22 08:31:49.404972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.454 [2024-11-22 08:31:49.407466] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65308 00:11:14.454 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65308 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65308 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_eSGGK.txt 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:14.454 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_eSGGK.txt 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65279 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65279 ']' 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65279 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65279 00:11:14.714 killing process with pid 65279 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65279' 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65279 00:11:14.714 08:31:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65279 00:11:17.251 08:31:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:17.251 08:31:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:17.251 00:11:17.251 real 0m6.596s 00:11:17.251 user 0m22.774s 00:11:17.252 sys 0m0.940s 00:11:17.252 ************************************ 00:11:17.252 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:11:17.252 ************************************ 00:11:17.252 08:31:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.252 08:31:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:17.252 08:31:52 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:17.252 08:31:52 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:17.252 08:31:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.252 08:31:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.252 08:31:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:17.252 ************************************ 00:11:17.252 START TEST nvme_fio 00:11:17.252 ************************************ 00:11:17.252 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:11:17.252 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:17.252 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:17.252 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:17.252 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:17.252 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:11:17.252 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:17.252 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:17.252 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:17.252 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:17.252 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:17.252 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:17.252 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:17.252 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:17.252 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:17.252 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:17.821 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:17.821 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:18.081 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:18.081 08:31:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:18.081 08:31:52 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:18.081 08:31:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:18.081 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:18.081 fio-3.35 00:11:18.081 Starting 1 thread 00:11:22.273 00:11:22.273 test: (groupid=0, jobs=1): err= 0: pid=65467: Fri Nov 22 08:31:56 2024 00:11:22.273 read: IOPS=23.7k, BW=92.8MiB/s (97.3MB/s)(186MiB/2001msec) 00:11:22.273 slat (nsec): min=3691, max=78365, avg=4178.69, stdev=1041.41 00:11:22.273 clat (usec): min=188, max=9584, avg=2687.31, stdev=336.72 00:11:22.273 lat (usec): min=192, max=9663, avg=2691.49, stdev=337.01 00:11:22.273 clat percentiles (usec): 00:11:22.273 | 1.00th=[ 2040], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:11:22.273 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:11:22.273 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3032], 00:11:22.273 | 99.00th=[ 3949], 99.50th=[ 4293], 99.90th=[ 5669], 99.95th=[ 7242], 00:11:22.273 | 99.99th=[ 9241] 00:11:22.273 bw ( KiB/s): min=90608, max=97616, per=99.18%, avg=94216.00, stdev=3508.63, samples=3 00:11:22.273 iops : min=22652, max=24404, avg=23554.00, stdev=877.16, samples=3 00:11:22.273 write: IOPS=23.6k, BW=92.2MiB/s (96.7MB/s)(185MiB/2001msec); 0 zone resets 00:11:22.273 slat (nsec): min=3785, max=31347, avg=4360.48, stdev=911.01 00:11:22.273 clat (usec): min=238, max=9335, avg=2693.98, stdev=339.29 00:11:22.273 lat (usec): min=242, max=9351, avg=2698.34, stdev=339.56 00:11:22.273 clat percentiles (usec): 00:11:22.273 | 1.00th=[ 2008], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:11:22.273 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2704], 00:11:22.273 | 70.00th=[ 2769], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3064], 00:11:22.273 | 99.00th=[ 3949], 99.50th=[ 4293], 99.90th=[ 5932], 99.95th=[ 7504], 00:11:22.273 | 99.99th=[ 9110] 00:11:22.273 bw ( KiB/s): min=91416, max=96952, per=99.79%, avg=94224.00, stdev=2768.87, samples=3 00:11:22.273 iops : min=22854, max=24238, avg=23556.00, stdev=692.22, samples=3 00:11:22.273 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:22.273 lat (msec) : 2=0.91%, 4=98.15%, 10=0.90% 00:11:22.273 cpu : usr=99.30%, sys=0.10%, ctx=2, majf=0, 
minf=607 00:11:22.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:22.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.273 issued rwts: total=47521,47234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.273 00:11:22.273 Run status group 0 (all jobs): 00:11:22.273 READ: bw=92.8MiB/s (97.3MB/s), 92.8MiB/s-92.8MiB/s (97.3MB/s-97.3MB/s), io=186MiB (195MB), run=2001-2001msec 00:11:22.273 WRITE: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=185MiB (193MB), run=2001-2001msec 00:11:22.273 ----------------------------------------------------- 00:11:22.273 Suppressions used: 00:11:22.273 count bytes template 00:11:22.273 1 32 /usr/src/fio/parse.c 00:11:22.273 1 8 libtcmalloc_minimal.so 00:11:22.273 ----------------------------------------------------- 00:11:22.273 00:11:22.273 08:31:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:22.273 08:31:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:22.273 08:31:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:22.273 08:31:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:22.273 08:31:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:22.273 08:31:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:22.531 08:31:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:22.531 08:31:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:22.531 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:22.531 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:22.531 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:22.531 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:22.531 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:22.531 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:22.531 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:22.531 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:22.532 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:22.532 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:22.532 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:22.532 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:22.532 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:22.532 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:22.532 08:31:57 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:22.532 08:31:57 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:22.790 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:22.790 fio-3.35 00:11:22.790 Starting 1 thread 00:11:26.977 00:11:26.977 test: (groupid=0, jobs=1): err= 0: pid=65529: Fri Nov 22 08:32:01 2024 00:11:26.977 read: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(181MiB/2001msec) 00:11:26.977 slat (usec): min=3, max=110, avg= 4.45, stdev= 1.31 00:11:26.977 clat (usec): min=231, max=10437, avg=2763.27, stdev=527.95 00:11:26.977 lat (usec): min=236, max=10547, avg=2767.72, stdev=528.74 00:11:26.977 clat percentiles (usec): 00:11:26.977 | 1.00th=[ 2376], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:11:26.977 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:11:26.977 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 2999], 00:11:26.977 | 99.00th=[ 5145], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 8848], 00:11:26.977 | 99.99th=[10290] 00:11:26.977 bw ( KiB/s): min=88832, max=91832, per=98.25%, avg=90754.67, stdev=1669.11, samples=3 00:11:26.977 iops : min=22208, max=22958, avg=22688.67, stdev=417.28, samples=3 00:11:26.977 write: IOPS=23.0k, BW=89.7MiB/s (94.1MB/s)(179MiB/2001msec); 0 zone resets 00:11:26.977 slat (nsec): min=3872, max=49213, avg=4656.71, stdev=1242.55 00:11:26.977 clat (usec): min=239, max=10337, avg=2768.94, stdev=522.61 00:11:26.977 lat (usec): min=243, max=10358, avg=2773.60, stdev=523.36 00:11:26.977 clat percentiles (usec): 00:11:26.977 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:11:26.977 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:11:26.977 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 3032], 00:11:26.977 | 99.00th=[ 5211], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[ 8717], 00:11:26.977 | 99.99th=[10159] 00:11:26.977 bw ( KiB/s): min=90752, max=91280, per=99.01%, avg=90946.67, stdev=290.03, samples=3 00:11:26.977 iops : min=22688, max=22820, avg=22737.33, stdev=72.04, samples=3 00:11:26.977 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:26.977 lat (msec) : 2=0.43%, 4=97.84%, 10=1.69%, 20=0.01% 00:11:26.977 cpu : usr=99.35%, sys=0.05%, ctx=2, majf=0, minf=608 00:11:26.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:26.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:26.977 issued rwts: total=46210,45951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:26.977 00:11:26.977 Run status group 0 (all jobs): 00:11:26.977 READ: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=181MiB (189MB), run=2001-2001msec 00:11:26.978 WRITE: bw=89.7MiB/s (94.1MB/s), 89.7MiB/s-89.7MiB/s (94.1MB/s-94.1MB/s), io=179MiB (188MB), run=2001-2001msec 00:11:26.978 ----------------------------------------------------- 00:11:26.978 Suppressions used: 00:11:26.978 count bytes template 00:11:26.978 1 32 /usr/src/fio/parse.c 00:11:26.978 1 8 libtcmalloc_minimal.so 00:11:26.978 ----------------------------------------------------- 00:11:26.978 00:11:26.978 
08:32:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:26.978 08:32:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:26.978 08:32:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:26.978 08:32:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:27.237 08:32:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:27.237 08:32:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:27.496 08:32:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:27.496 08:32:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:27.496 08:32:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:27.755 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:27.755 fio-3.35 00:11:27.755 Starting 1 thread 00:11:31.948 00:11:31.948 test: (groupid=0, jobs=1): err= 0: pid=65590: Fri Nov 22 08:32:06 2024 00:11:31.948 read: IOPS=23.6k, BW=92.4MiB/s (96.8MB/s)(185MiB/2001msec) 00:11:31.948 slat (usec): min=3, max=140, avg= 4.33, stdev= 1.35 00:11:31.948 clat (usec): min=183, max=11536, avg=2699.26, stdev=322.77 00:11:31.948 lat (usec): min=187, max=11591, avg=2703.59, stdev=323.18 00:11:31.948 clat percentiles (usec): 00:11:31.948 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:11:31.948 | 30.00th=[ 2606], 40.00th=[ 
2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:11:31.948 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 2999], 00:11:31.948 | 99.00th=[ 3556], 99.50th=[ 4293], 99.90th=[ 6128], 99.95th=[ 8848], 00:11:31.948 | 99.99th=[11338] 00:11:31.948 bw ( KiB/s): min=92624, max=95160, per=99.00%, avg=93629.33, stdev=1347.15, samples=3 00:11:31.948 iops : min=23156, max=23790, avg=23407.33, stdev=336.79, samples=3 00:11:31.948 write: IOPS=23.5k, BW=91.8MiB/s (96.2MB/s)(184MiB/2001msec); 0 zone resets 00:11:31.948 slat (usec): min=3, max=147, avg= 4.53, stdev= 1.52 00:11:31.948 clat (usec): min=251, max=11414, avg=2706.66, stdev=334.68 00:11:31.948 lat (usec): min=255, max=11435, avg=2711.19, stdev=335.05 00:11:31.948 clat percentiles (usec): 00:11:31.948 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:11:31.948 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:11:31.948 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 2999], 00:11:31.948 | 99.00th=[ 3687], 99.50th=[ 4359], 99.90th=[ 6456], 99.95th=[ 9110], 00:11:31.948 | 99.99th=[11076] 00:11:31.948 bw ( KiB/s): min=91704, max=96568, per=99.70%, avg=93690.67, stdev=2551.39, samples=3 00:11:31.948 iops : min=22926, max=24142, avg=23422.67, stdev=637.85, samples=3 00:11:31.948 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:31.948 lat (msec) : 2=0.61%, 4=98.63%, 10=0.68%, 20=0.03% 00:11:31.948 cpu : usr=98.80%, sys=0.30%, ctx=15, majf=0, minf=607 00:11:31.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:31.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.948 issued rwts: total=47311,47008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.948 00:11:31.948 Run status group 0 (all jobs): 00:11:31.948 READ: bw=92.4MiB/s (96.8MB/s), 92.4MiB/s-92.4MiB/s (96.8MB/s-96.8MB/s), io=185MiB (194MB), run=2001-2001msec 00:11:31.948 WRITE: bw=91.8MiB/s (96.2MB/s), 91.8MiB/s-91.8MiB/s (96.2MB/s-96.2MB/s), io=184MiB (193MB), run=2001-2001msec 00:11:31.948 ----------------------------------------------------- 00:11:31.948 Suppressions used: 00:11:31.948 count bytes template 00:11:31.948 1 32 /usr/src/fio/parse.c 00:11:31.948 1 8 libtcmalloc_minimal.so 00:11:31.948 ----------------------------------------------------- 00:11:31.948 00:11:31.948 08:32:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:31.948 08:32:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:31.948 08:32:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:31.948 08:32:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:32.208 08:32:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:32.208 08:32:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:32.467 08:32:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:32.467 08:32:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:32.467 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:32.726 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:32.726 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:32.726 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:32.726 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:32.726 08:32:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:32.726 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:32.726 fio-3.35 00:11:32.726 Starting 1 thread 00:11:39.296 00:11:39.296 test: (groupid=0, jobs=1): err= 0: pid=65656: Fri Nov 22 08:32:13 2024 00:11:39.296 read: IOPS=23.3k, BW=91.0MiB/s (95.5MB/s)(182MiB/2001msec) 00:11:39.296 slat (nsec): min=3738, max=52501, avg=4203.22, stdev=933.69 00:11:39.296 clat (usec): min=599, max=10482, avg=2743.17, stdev=320.09 00:11:39.296 lat (usec): min=603, max=10534, avg=2747.37, stdev=320.40 00:11:39.296 clat percentiles (usec): 00:11:39.296 | 1.00th=[ 1909], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:11:39.296 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:39.296 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 2966], 00:11:39.296 | 99.00th=[ 3851], 99.50th=[ 4424], 99.90th=[ 5473], 99.95th=[ 7635], 00:11:39.296 | 99.99th=[10159] 00:11:39.296 bw ( KiB/s): min=89480, max=96040, per=100.00%, avg=93312.00, stdev=3416.51, samples=3 00:11:39.296 iops : min=22370, max=24010, avg=23328.00, stdev=854.13, samples=3 00:11:39.296 write: IOPS=23.1k, BW=90.4MiB/s (94.8MB/s)(181MiB/2001msec); 0 zone resets 00:11:39.296 slat (nsec): min=3839, max=42934, avg=4394.53, stdev=931.22 00:11:39.296 clat (usec): min=596, max=10251, avg=2744.84, stdev=322.15 00:11:39.296 lat (usec): min=600, max=10273, avg=2749.23, stdev=322.43 00:11:39.296 clat percentiles (usec): 00:11:39.296 | 1.00th=[ 1942], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:11:39.296 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:39.296 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 2966], 
00:11:39.296 | 99.00th=[ 3818], 99.50th=[ 4359], 99.90th=[ 5800], 99.95th=[ 8029], 00:11:39.296 | 99.99th=[ 9896] 00:11:39.296 bw ( KiB/s): min=89000, max=97608, per=100.00%, avg=93392.00, stdev=4306.70, samples=3 00:11:39.296 iops : min=22250, max=24402, avg=23348.00, stdev=1076.67, samples=3 00:11:39.296 lat (usec) : 750=0.01%, 1000=0.04% 00:11:39.296 lat (msec) : 2=1.13%, 4=98.05%, 10=0.76%, 20=0.01% 00:11:39.296 cpu : usr=99.50%, sys=0.05%, ctx=3, majf=0, minf=605 00:11:39.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:39.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.296 issued rwts: total=46636,46320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.296 00:11:39.296 Run status group 0 (all jobs): 00:11:39.296 READ: bw=91.0MiB/s (95.5MB/s), 91.0MiB/s-91.0MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec 00:11:39.296 WRITE: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=181MiB (190MB), run=2001-2001msec 00:11:39.296 ----------------------------------------------------- 00:11:39.296 Suppressions used: 00:11:39.296 count bytes template 00:11:39.296 1 32 /usr/src/fio/parse.c 00:11:39.296 1 8 libtcmalloc_minimal.so 00:11:39.296 ----------------------------------------------------- 00:11:39.296 00:11:39.296 08:32:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:39.296 08:32:13 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:39.296 00:11:39.296 real 0m21.446s 00:11:39.296 user 0m16.674s 00:11:39.296 sys 0m4.987s 00:11:39.296 08:32:13 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.296 08:32:13 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:39.296 ************************************ 00:11:39.296 END TEST nvme_fio 00:11:39.296 ************************************ 00:11:39.296 00:11:39.296 real 1m37.193s 00:11:39.296 user 3m46.123s 00:11:39.296 sys 0m24.733s 00:11:39.296 08:32:13 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.296 ************************************ 00:11:39.296 END TEST nvme 00:11:39.297 ************************************ 00:11:39.297 08:32:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.297 08:32:13 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:39.297 08:32:13 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:39.297 08:32:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.297 08:32:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.297 08:32:13 -- common/autotest_common.sh@10 -- # set +x 00:11:39.297 ************************************ 00:11:39.297 START TEST nvme_scc 00:11:39.297 ************************************ 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:39.297 * Looking for test storage... 
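Each suite in this log is wrapped by autotest_common.sh's run_test helper, which prints the START/END banner rows of asterisks and the bash time summary (the real/user/sys lines above), suppressing xtrace around its own bookkeeping. A rough reconstruction of that wrapper, under the assumption that the real helper also records per-test timing data beyond what is sketched here:

    # Hedged reconstruction of the run_test banners seen in this log; the
    # actual helper in autotest_common.sh does more bookkeeping than this.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # produces the real/user/sys summary lines
        echo "************ END TEST $name ************"
    }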
00:11:39.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.297 --rc genhtml_branch_coverage=1 00:11:39.297 --rc genhtml_function_coverage=1 00:11:39.297 --rc genhtml_legend=1 00:11:39.297 --rc geninfo_all_blocks=1 00:11:39.297 --rc geninfo_unexecuted_blocks=1 00:11:39.297 00:11:39.297 ' 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.297 --rc genhtml_branch_coverage=1 00:11:39.297 --rc genhtml_function_coverage=1 00:11:39.297 --rc genhtml_legend=1 00:11:39.297 --rc geninfo_all_blocks=1 00:11:39.297 --rc geninfo_unexecuted_blocks=1 00:11:39.297 00:11:39.297 ' 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:39.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.297 --rc genhtml_branch_coverage=1 00:11:39.297 --rc genhtml_function_coverage=1 00:11:39.297 --rc genhtml_legend=1 00:11:39.297 --rc geninfo_all_blocks=1 00:11:39.297 --rc geninfo_unexecuted_blocks=1 00:11:39.297 00:11:39.297 ' 00:11:39.297 08:32:13 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.297 --rc genhtml_branch_coverage=1 00:11:39.297 --rc genhtml_function_coverage=1 00:11:39.297 --rc genhtml_legend=1 00:11:39.297 --rc geninfo_all_blocks=1 00:11:39.297 --rc geninfo_unexecuted_blocks=1 00:11:39.297 00:11:39.297 ' 00:11:39.297 08:32:13 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.297 08:32:13 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.297 08:32:13 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.297 08:32:13 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.297 08:32:13 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.297 08:32:13 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:39.297 08:32:13 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
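The nvme_scc test next sources test/common/nvme/functions.sh and calls scan_nvme_ctrls, which walks every /sys/class/nvme/nvme* device; for each one, nvme_get runs nvme-cli's id-ctrl (and id-ns for each namespace), splits each 'field : value' output line on ':' with IFS, and evals the pair into a bash associative array (nvme0, ng0n1, and so on). That is why the trace below repeats the same IFS/read/eval triplet once per register. A minimal sketch of the parsing idiom, with the device and array names written out literally rather than passed as parameters as the real helper does:

    # Sketch of the nvme_get idiom traced below: "reg : val" -> nvme0[reg]=val.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # strip padding around the field name
        [[ -n $reg && -n $val ]] || continue  # ignore blank or headerless lines
        # eval is used so the real helper can target an array named at runtime
        eval "nvme0[$reg]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)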
00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:39.297 08:32:13 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:39.297 08:32:13 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:39.297 08:32:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:39.297 08:32:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:39.297 08:32:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:39.297 08:32:14 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:39.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:39.817 Waiting for block devices as requested 00:11:40.076 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.076 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.390 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.390 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.711 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:45.711 08:32:20 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:45.711 08:32:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:45.711 08:32:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:45.711 08:32:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:45.711 08:32:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.711 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.712 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:45.713 08:32:20 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:45.713 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:45.714 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:45.714 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:45.715 
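The IFS=: / read -r reg val / eval triples filling this trace are nvme/functions.sh turning the human-readable output of nvme id-ns into a global bash associative array, one key per register. A minimal sketch of that idiom, assuming plain "nsze : 0x140000"-style nvme-cli output (the helper name below is made up; SPDK's real nvme_get also shifts off extra arguments and filters more cases than this):

    nvme_get_sketch() {                    # usage: nvme_get_sketch ng0n1 /dev/ng0n1
        local dev=$2 reg val
        declare -gA "$1=()"                # global map named after the device node
        local -n arr=$1                    # nameref, needs bash 4.3+
        while IFS=: read -r reg val; do    # split "nsze : 0x140000" at the first colon
            reg=${reg//[[:space:]]/}       # drop nvme-cli's column padding
            [[ -n $reg && -n $val ]] && arr[$reg]=${val# }
        done < <(nvme id-ns "$dev")
    }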
08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:45.715 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:45.715 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:45.716 08:32:20 nvme_scc 
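The for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* loop traced at functions.sh@54 depends on bash's extglob: the @(a|b) alternation lets a single glob pick up both the character-device namespace (ng0n1) and the block-device one (nvme0n1) under the controller's sysfs entry. A standalone repro of the pattern:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"                   # ng0n1, then nvme0n1
    done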
-- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:45.716 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 
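A quick cross-check of the id-ns values being captured: flbas=0x4 selects lbaf4, the entry tagged "(in use)", whose lbads:12 means 2^12-byte blocks, so nsze=0x140000 pins down the namespace size:

    echo $(( 0x140000 * (1 << 12) ))       # 1310720 blocks * 4096 B = 5368709120 B = 5 GiB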
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:45.717 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:45.717 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:45.718 08:32:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:45.718 08:32:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:45.718 08:32:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:45.718 08:32:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:45.718 08:32:20 
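The functions.sh@60-63 assignments above are the bookkeeping that later test helpers consume: ctrls/nvmes/bdfs map each controller to its name, its namespace map, and its PCI address, while ordered_ctrls indexes by controller number. A rough sketch of that outer loop, keeping the same array names but eliding the pci_can_use() filtering and the per-namespace nvme_get pass; resolving the BDF through the sysfs device symlink is an assumption (the real derivation lives in scripts/common.sh):

    shopt -s nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of its namespace map
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
    done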
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 
08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:45.718 
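For scale, the mdts=7 parsed above is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN; 4 KiB is an assumption here, since the CAP register is not part of this trace):

    echo $(( (1 << 7) * 4096 ))            # mdts=7 -> 524288 B = 512 KiB max data transfer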
08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.718 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:45.719 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.720 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.720 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:45.721 08:32:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
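For reference, the three 0x17a17a values just captured (nsze, ncap, nuse) are block counts, not bytes. The low four bits of flbas (0x7 here) select the active LBA format, and the matching lbaf7 entry further down this trace reports "lbads:12 ... (in use)", i.e. 4096-byte blocks. A back-of-the-envelope conversion under those assumptions:

    # 0x17a17a blocks at 2^12 bytes per block
    echo $(( 0x17a17a * (1 << 12) ))   # 6343335936 bytes, ~5.9 GiB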
00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:45.721 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:45.722 08:32:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 
08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.722 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
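The same id-ns fields are now being re-read through the block node /dev/nvme1n1; the surrounding loop visited the character node ng1n1 first because the extglob pattern at functions.sh@54 matches both names under the controller's sysfs directory. The pattern is worth unpacking on its own (paths below mirror the trace):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the glob expands
    # to /sys/class/nvme/nvme1/@(ng1|nvme1n)* and matches ng1n1 and nvme1n1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"
    done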
00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:45.723 
08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:45.723 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:45.724 08:32:20 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:45.724 08:32:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:45.724 08:32:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:45.724 08:32:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:45.724 08:32:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.724 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
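Here cntrltype=1 identifies an I/O controller, and the all-zero fguid just means no FRU globally unique identifier is reported. A small lookup, assuming the standard NVMe encoding of the controller type field:

    case ${nvme2[cntrltype]} in
        1) echo "I/O controller" ;;
        2) echo "discovery controller" ;;
        3) echo "administrative controller" ;;
        *) echo "not reported" ;;
    esac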
00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:45.725 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
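Two of the values just captured, wctemp=343 and cctemp=373, are kelvins per the NVMe Identify Controller field definitions; a quick decode, assuming the usual integer K-273 conversion that nvme-cli itself applies when pretty-printing:

  # Illustrative decode of the thresholds recorded above (kelvins -> Celsius).
  wctemp=343 cctemp=373
  echo "warning at $(( wctemp - 273 ))C, critical at $(( cctemp - 273 ))C"   # 70C, 100C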
00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.725 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:45.726 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:45.726 
08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:45.726 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:45.727 
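The functions.sh@54 loop header just traced is what drives the namespace scan that follows. With ctrl=/sys/class/nvme/nvme2, the two parameter expansions reduce to "ng2" and "nvme2n", so the extglob alternation matches both the character-device entries (ng2n1, ...) and the block-device entries (nvme2n1, ...) under the controller's sysfs directory. A self-contained sketch of that expansion (extglob must be enabled before the pattern is parsed; with nullglob set, a machine without these devices simply prints nothing):

  #!/usr/bin/env bash
  # Illustrative expansion of the @54 glob for ctrl=/sys/class/nvme/nvme2.
  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  echo "char-dev prefix : ng${ctrl##*nvme}"    # -> ng2
  echo "block-dev prefix: ${ctrl##*/}n"        # -> nvme2n
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace entry: ${ns##*/}"          # ng2n1, ng2n2, ng2n3, nvme2n1, ...
  done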
08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
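For ng2n1 the scan just recorded flbas=0x4: bits 3:0 of flbas index the LBA format currently in use, and the matching lbaf4 entry captured further down carries lbads:12, the log2 of the logical block size. A small decode, with both values copied from this log rather than re-queried (illustrative only):

  # flbas bits 3:0 select the active LBA format; lbads gives log2(block size).
  flbas=0x4
  fmt=$(( flbas & 0xf ))    # -> 4, i.e. the "lbaf4 ... (in use)" entry below
  lbads=12                  # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
  echo "format $fmt, block size $(( 1 << lbads )) bytes"   # 4096

So all three QEMU namespaces here run 4096-byte blocks with no metadata (ms:0), consistent with the nsze/ncap/nuse values of 0x100000 blocks apiece.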
00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:45.727 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:45.728 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:45.729 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 
08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:45.729 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.730 08:32:20 
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:11:45.730 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:45.731 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
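The trace repeats one parsing idiom per namespace: nvme/functions.sh@16-23 runs nvme-cli's id-ns, splits each "field : value" output line on the first colon, and evals the pair into a global associative array named after the device node. A minimal standalone sketch of that pattern, for illustration only (the helper name parse_id_ns and the whitespace trimming are assumptions of this sketch, not the functions.sh source):

    # Sketch of the nvme_get pattern traced at nvme/functions.sh@16-23.
    # parse_id_ns is a hypothetical name; error handling is omitted.
    parse_id_ns() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                       # global assoc array, e.g. ng2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # 'nsze   ' -> 'nsze' (assumed trim)
            [[ -n $reg && -n $val ]] || continue  # skip banner/blank lines
            eval "${ref}[\$reg]=\${val# }"        # e.g. ng2n3[nsze]=0x100000
        done < <(nvme id-ns "$dev")
    }
    # Usage: parse_id_ns ng2n3 /dev/ng2n3; echo "${ng2n3[nsze]}"   # -> 0x100000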
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:11:45.732 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:11:45.733 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
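Every namespace here reports flbas=0x4 with lbaf4 flagged "(in use)", i.e. 4096-byte data blocks and no per-block metadata. A quick way to decode that from the array the trace just filled in; a sketch assuming the nvme2n1 array above is in scope (variable names fmt/lbads/bs are illustrative, not from functions.sh):

    fmt=$(( ${nvme2n1[flbas]} & 0xf ))    # flbas bits 3:0 select the LBA format: 0x4 -> lbaf4
    # lbaf4 is 'ms:0 lbads:12 rp:0 (in use)'; lbads is log2 of the block size.
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme2n1[lbaf$fmt]}")
    bs=$(( 1 << lbads ))                  # 2^12 = 4096 bytes
    echo "$(( ${nvme2n1[nsze]} * bs / 1024**3 )) GiB"   # 0x100000 blocks * 4096 B = 4 GiB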
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:11:45.997 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
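The loop driving all of this is the extglob pattern at nvme/functions.sh@54, which enumerates both the block nodes (nvme2n1..n3) and the generic character nodes (ng2n1..n3) under one controller's sysfs directory. Roughly, assuming extglob is enabled as the traced pattern implies (the loop body here is illustrative only):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    # "ng${ctrl##*nvme}" expands to ng2 and "${ctrl##*/}n" to nvme2n,
    # so the glob matches ng2n*, nvme2n* entries under $ctrl.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] && echo "namespace node: ${ns##*/}"   # e.g. ng2n1, nvme2n1
    done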
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:11:45.998 08:32:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:11:45.999 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:46.000 08:32:20 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:46.000 08:32:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:46.000 08:32:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:46.000 08:32:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:46.000 08:32:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.000 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:46.001 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:46.001 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 
08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:46.001 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:46.002 08:32:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 
08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:46.002 
08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.002 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:46.003 08:32:20 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:46.003 08:32:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:46.003 08:32:20 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:11:46.004 08:32:20 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:11:46.004 08:32:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:46.004 08:32:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:11:46.004 08:32:20 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:46.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:47.511 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:47.511 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:47.511 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:47.511 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:47.771 08:32:22 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:47.771 08:32:22 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:47.771 08:32:22 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:47.771 08:32:22 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:47.771 ************************************
00:11:47.771 START TEST nvme_simple_copy ************************************
00:11:47.771 08:32:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:48.031 Initializing NVMe Controllers
00:11:48.031 Attaching to 0000:00:10.0
00:11:48.031 Controller supports SCC. Attached to 0000:00:10.0
00:11:48.031 Namespace ID: 1 size: 6GB
00:11:48.031 Initialization complete.
00:11:48.031
00:11:48.031 Controller QEMU NVMe Ctrl (12340 )
00:11:48.031 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:48.031 Namespace Block Size:4096
00:11:48.031 Writing LBAs 0 to 63 with Random Data
00:11:48.031 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:48.031 LBAs matching Written Data: 64
00:11:48.031
00:11:48.031 real 0m0.326s
00:11:48.031 user 0m0.126s
00:11:48.031 sys 0m0.099s
00:11:48.031 08:32:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:48.031 08:32:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:48.031 ************************************
00:11:48.031 END TEST nvme_simple_copy ************************************
00:11:48.031
00:11:48.031 real 0m9.250s
00:11:48.031 user 0m1.618s
00:11:48.031 sys 0m2.680s
00:11:48.031 08:32:23 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:48.031 08:32:23 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:48.031 ************************************
00:11:48.031 END TEST nvme_scc ************************************
00:11:48.031 08:32:23 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:11:48.031 08:32:23 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:11:48.031 08:32:23 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:11:48.031 08:32:23 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:11:48.031 08:32:23 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:48.031 08:32:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:48.031 08:32:23 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:48.031 08:32:23 -- common/autotest_common.sh@10 -- # set +x
00:11:48.031 ************************************
00:11:48.031 START TEST nvme_fdp ************************************
00:11:48.031 08:32:23 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:11:48.292 * Looking for test storage...
00:11:48.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:48.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:48.292 --rc genhtml_branch_coverage=1
00:11:48.292 --rc genhtml_function_coverage=1
00:11:48.292 --rc genhtml_legend=1
00:11:48.292 --rc geninfo_all_blocks=1
00:11:48.292 --rc geninfo_unexecuted_blocks=1
00:11:48.292
00:11:48.292 '
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:48.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:48.292 --rc genhtml_branch_coverage=1
00:11:48.292 --rc genhtml_function_coverage=1
00:11:48.292 --rc genhtml_legend=1
00:11:48.292 --rc geninfo_all_blocks=1
00:11:48.292 --rc geninfo_unexecuted_blocks=1
00:11:48.292
00:11:48.292 '
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:48.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:48.292 --rc genhtml_branch_coverage=1
00:11:48.292 --rc genhtml_function_coverage=1
00:11:48.292 --rc genhtml_legend=1
00:11:48.292 --rc geninfo_all_blocks=1
00:11:48.292 --rc geninfo_unexecuted_blocks=1
00:11:48.292
00:11:48.292 '
00:11:48.292 08:32:23 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:48.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:48.292 --rc genhtml_branch_coverage=1
00:11:48.292 --rc genhtml_function_coverage=1
00:11:48.292 --rc genhtml_legend=1
00:11:48.292 --rc geninfo_all_blocks=1
00:11:48.292 --rc geninfo_unexecuted_blocks=1
00:11:48.292
00:11:48.292 '
00:11:48.292 08:32:23 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:48.292 08:32:23 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:48.292 08:32:23 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:48.292 08:32:23 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:48.292 08:32:23 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:48.292 08:32:23 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:11:48.292 08:32:23 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:11:48.292 08:32:23 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:11:48.292 08:32:23 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:48.292 08:32:23 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:48.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:49.432 Waiting for block devices as requested
00:11:49.432 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:11:49.432 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:11:49.432 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:11:49.692 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:11:54.981 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:11:54.981 08:32:29 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:11:54.981 08:32:29 nvme_fdp -- scripts/common.sh@18 -- # local i
00:11:54.981 08:32:29 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:11:54.981 08:32:29 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:54.981 08:32:29 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:11:54.981 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]]
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "'
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "'
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "'
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:54.982 08:32:29 nvme_fdp --
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:54.982 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:54.982 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.982 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:54.983 08:32:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:54.983 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 
08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:54.984 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:54.984 08:32:29 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.984 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:54.985 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:54.985 08:32:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:54.985 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
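(Everything from functions.sh@45 down to this point is bash xtrace of one small parser: for each controller and namespace, nvme_get runs nvme-cli (functions.sh@16), splits each output line on ':' (the IFS= at @21), skips lines with no value (@22), and evals the pair into a global associative array (@23) — which is why the log repeats the same three steps for every register. A minimal, runnable re-creation of that pattern is sketched below; nvme_get here is an approximation of the traced helper, and fake_id_ns / demo_ns are illustrative stand-ins for the real device query, not SPDK names.

shopt -s extglob                         # enabled by scripts/common.sh@15 in the trace

nvme_get() {
    # $1 names the global associative array to fill; the remaining
    # arguments are the command producing "register : value" lines.
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # functions.sh@20: fresh global array
    while IFS=: read -r reg val; do      # functions.sh@21: split on ':'
        [[ -n $val ]] || continue        # functions.sh@22: no value -> skip the line
        reg=${reg%%+([[:space:]])}       # trim the padding nvme-cli prints
        val=${val##+([[:space:]])}
        eval "${ref}[${reg}]=\"${val}\"" # functions.sh@23: e.g. ng0n1[nsze]="0x140000"
    done < <("$@")
}

# Stand-in for `/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1`,
# echoing a few register values copied from the trace above.
fake_id_ns() { printf '%s\n' 'nsze  : 0x140000' 'flbas : 0x4' 'nlbaf : 7'; }

nvme_get demo_ns fake_id_ns
echo "${demo_ns[nsze]} ${demo_ns[flbas]}"   # -> 0x140000 0x4

Once populated this way, the rest of the suite can test controller capabilities with plain array lookups instead of re-running nvme-cli for every check.)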
00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.986 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:54.987 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:54.987 08:32:29 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:54.987 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:54.988 08:32:29 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:54.988 08:32:29 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:54.988 08:32:29 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:54.988 08:32:29 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:54.988 08:32:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:54.988 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
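A few entries back (functions.sh@58-63) the nvme0 results were registered before the walk moved on to nvme1, and pci_can_use (scripts/common.sh) gated 0000:00:10.0 against the allow/block lists, which are apparently empty here. The shape of those registries, with the declarations assumed and only the assignments taken from the trace:

declare -A ctrls=() nvmes=() bdfs=()   # assumed declarations
declare -a ordered_ctrls=()
ctrls[nvme0]=nvme0             # name of the assoc array holding nvme0's id-ctrl fields
nvmes[nvme0]=nvme0_ns          # name of the map from namespace index to ns array
bdfs[nvme0]=0000:00:11.0       # PCI function backing the controller
ordered_ctrls[0]=nvme0         # index is ${ctrl_dev/nvme/}, so controllers sort numerically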
00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:54.989 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
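id-ctrl fields such as oacs=0x12a, ctratt=0x8000, frmw=0x3 and lpa=0x7 above are bit masks; once they sit in the array they can be tested with ordinary shell arithmetic. A hypothetical helper, not part of functions.sh:

ctrl_bit_set() {
    local -n _ctrl=$1                # nameref to a controller array, e.g. nvme1
    local val=${_ctrl[$2]} bit=$3
    (( (val & (1 << bit)) != 0 ))    # bash arithmetic takes the 0x-prefixed hex as-is
}
# ctrl_bit_set nvme1 oacs 3 && echo "namespace management supported"  # 0x12a has bit 3 set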
00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:54.990 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:54.991 08:32:29 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
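flbas=0x7 for ng1n1 here (and 0x4 for nvme0n1 earlier) is what ties a namespace to one of its lbaf0-7 descriptors: the low nibble selects the format in use, and lbads is the log2 of the block size. A hypothetical decode on top of the arrays built above:

lba_size() {
    local -n _ns=$1
    local fmt=$(( ${_ns[flbas]} & 0xf ))    # low nibble = index of the in-use format
    local lbads=${_ns[lbaf$fmt]#*lbads:}    # 'ms:0 lbads:12 rp:0 (in use)' -> '12 rp:0 (in use)'
    echo $(( 1 << ${lbads%% *} ))           # lbads:12 -> 4096-byte blocks
}
# lba_size nvme0n1  -> 4096 (flbas=0x4 selects lbaf4, which the trace marks "(in use)")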
00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.991 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:54.992 08:32:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:54.992 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.992 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:54.993 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:54.993 08:32:29 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.993 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:54.994 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
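Each lbafN value stored in these id-ns dumps encodes one LBA format: ms (metadata bytes per block), lbads (log2 of the data block size), rp (relative performance). The low bits of flbas select the format in use -- 0x7 for nvme1n1 above, i.e. lbaf7 with lbads:12, 4096-byte blocks plus 64 bytes of metadata, which is why lbaf7 carries the "(in use)" tag. A hypothetical helper (not part of nvme/functions.sh) that decodes the block size from an array filled in by nvme_get:

    # Hypothetical decoder over the arrays above; lbads is log2(block size),
    # so lbads:9 -> 512 bytes and lbads:12 -> 4096 bytes.
    lbaf_block_size() {
        local -n ns=$1                        # nameref, e.g. nvme1n1
        local idx=$(( ns[flbas] & 0xf ))      # format index: 0x7 -> lbaf7
        local lbads=${ns[lbaf$idx]#*lbads:}   # "ms:64 lbads:12 rp:0 ..." -> "12 ..."
        echo $(( 1 << ${lbads%% *} ))         # 2^12 = 4096
    }
    # lbaf_block_size nvme1n1   -> 4096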
00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:54.994 08:32:29 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:54.994 08:32:29 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:54.994 08:32:29 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:54.994 08:32:29 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:54.994 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
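The hand-off a few entries back -- _ctrl_ns, ctrls, nvmes, bdfs and ordered_ctrls being filled for nvme1, then pci_can_use approving 0000:00:12.0 before nvme_get runs id-ctrl on /dev/nvme2 -- is one iteration of the enumeration loop at nvme/functions.sh@47-63. A sketch of that loop reconstructed from the xtrace, not the verbatim source; @49 only shows the resulting BDF, so the readlink derivation is an assumption, and pci_can_use comes from scripts/common.sh:

    shopt -s extglob
    declare -A ctrls nvmes bdfs; declare -a ordered_ctrls    # globals in functions.sh
    for ctrl in /sys/class/nvme/nvme*; do                    # @47
        [[ -e $ctrl ]] || continue                           # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")      # @49: e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                       # @50: PCI_ALLOWED filter
        ctrl_dev=${ctrl##*/}                                 # @51: e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"        # @52
        declare -n _ctrl_ns=${ctrl_dev}_ns                   # @53 (local -n in-tree)
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54
            [[ -e $ns ]] || continue                         # @55: ng2n1, nvme2n1, ...
            ns_dev=${ns##*/}                                 # @56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"          # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                      # @58: keyed by NSID
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                         # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                    # @61
        bdfs["$ctrl_dev"]=$pci                               # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev           # @63
    done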
00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:54.995 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
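Two of the id-ctrl fields parsed here are easy to misread: per the NVMe spec, wctemp 343 and cctemp 373 are composite-temperature thresholds in kelvins, not Celsius. A quick conversion over the array just populated:

    echo $(( nvme2[wctemp] - 273 ))   # 343 K -> 70 C, warning threshold
    echo $(( nvme2[cctemp] - 273 ))   # 373 K -> 100 C, critical threshold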
00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:54.995 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:54.996 08:32:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.996 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
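With nsze and the in-use LBA format both captured, namespace capacity falls out directly: nsze is a block count, so nvme1n1 above is 0x17a17a = 1,548,666 blocks of 4096 bytes (flbas 0x7 -> lbads:12), about 5.9 GiB, while ng2n1, whose id-ns output follows, is 0x100000 blocks of 4096 bytes (flbas 0x4 -> lbaf4, lbads:12), exactly 4 GiB. As shell arithmetic over the arrays built by nvme_get, assuming the hypothetical lbaf_block_size helper sketched earlier:

    echo $(( nvme1n1[nsze] ))                               # 0x17a17a -> 1548666 blocks
    echo $(( nvme1n1[nsze] * $(lbaf_block_size nvme1n1) ))  # 6343335936 bytes, ~5.9 GiB
    echo $(( 0x100000 * 4096 ))                             # ng2n1: 4294967296, 4 GiB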
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@21 -- # 
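The run of @21-@23 entries above is nvme_get in nvme/functions.sh finishing the id-ctrl pass for nvme2: every "key : value" line of nvme-cli output is split on the colon (IFS=:, read -r reg val) and eval'd into an associative array named after the device. A minimal sketch of that parsing pattern, assuming nvme-cli is installed; "info" is an illustrative array name, and the real helper evals into a caller-chosen array with more field trimming:

#!/usr/bin/env bash
# Sketch of the nvme_get parsing loop traced above (simplified).
declare -A info
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue      # keep only "key : value" lines
    reg=${reg//[[:space:]]/}       # strip the padding around the key
    val=${val# }                   # drop the space after the colon
    info[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
echo "vwc=${info[vwc]:-unset}"     # 0x7 in the trace above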
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:11:54.997 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:11:54.998 08:32:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:11:54.998 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
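With the controller pass done, the @54 loop above enumerates the controller's namespaces: one extglob pattern matches both the generic character nodes (ng2n*) and the block nodes (nvme2n*) under /sys/class/nvme/nvme2, and each hit is recorded in _ctrl_ns keyed by its namespace number. A self-contained sketch of that loop, assuming the controller's sysfs directory is populated:

#!/usr/bin/env bash
# Sketch of the @54 namespace-enumeration loop from the trace.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns
# "ng${ctrl##*nvme}" -> ng2 and "${ctrl##*/}n" -> nvme2n, so the glob
# matches ng2n1.. and nvme2n1.. alike.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    _ctrl_ns[${ns##*n}]=${ns##*/}   # e.g. _ctrl_ns[1]=ng2n1
done
declare -p _ctrl_ns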
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:11:54.999 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
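Every namespace on this controller reports the same identify data: nsze/ncap/nuse of 0x100000 blocks, eight LBA formats, and flbas=0x4 selecting lbaf4 (lbads:12, i.e. 4096-byte logical blocks), which works out to 4 GiB per namespace. A small worked example over the values parsed above; the array literal is copied from the ng2n1 trace and the variable names are illustrative:

#!/usr/bin/env bash
# Worked example: block size and capacity from the id-ns fields above.
declare -A ng2n1=(
    [nsze]=0x100000
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)
fmt=$((${ng2n1[flbas]} & 0xf))   # low nibble of flbas selects the format
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ng2n1[lbaf$fmt]}")
block=$((1 << lbads))            # lbads:12 -> 2^12 = 4096-byte blocks
blocks=$((${ng2n1[nsze]}))       # 0x100000 -> 1048576 blocks
echo "$block B/block, $((block * blocks / 1024 ** 3)) GiB"   # 4096 B, 4 GiB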
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:11:55.000 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:11:55.001 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:11:55.001 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:11:55.001 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:11:55.266 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:11:55.266 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:11:55.266 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:11:55.266 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:11:55.266 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:11:55.266 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:11:55.266 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:55.267 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
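After the three ng2n* generic nodes the glob continues with the block nodes, so nvme2n1 below gets an identical id-ns pass. Note that the @58 index expression ${ns##*n} maps ng2n1 and nvme2n1 to the same key, so, assuming the loop also reaches @58 for the block nodes (the tail here is cut off before that point), the block-device name would overwrite the generic one:

#!/usr/bin/env bash
# Sketch: ng2n1 and nvme2n1 share namespace number 1 under the @58
# indexing shown above, so the later glob hit overwrites the earlier.
declare -A _ctrl_ns
for ns_dev in ng2n1 nvme2n1; do
    _ctrl_ns[${ns_dev##*n}]=$ns_dev
done
declare -p _ctrl_ns   # -> declare -A _ctrl_ns=([1]="nvme2n1" )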
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:11:55.268 08:32:30
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.268 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:55.269 
08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
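The records above are the xtrace of nvme/functions.sh@16-23 filling the nvme2n1 associative array from `nvme id-ns` text output. A minimal sketch of that parser, reconstructed from the trace alone (the real functions.sh may differ in details such as key normalization and quoting):

  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                      # global assoc array, e.g. nvme2n1=()
    while IFS=: read -r reg val; do          # split each "reg : val" line on the first ':'
      reg=${reg//[[:space:]]/}               # "lbaf  4 " -> "lbaf4"
      [[ -n $val ]] || continue              # skip blank/header lines (functions.sh@22)
      eval "${ref}[$reg]=\"${val# }\""       # e.g. nvme2n1[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
  }
  # invoked as: nvme_get nvme2n1 id-ns /dev/nvme2n1

Note that `read -r reg val` leaves everything after the first colon in val, which is why composite values such as "ms:0 lbads:12 rp:0 (in use)" survive intact as nvme2n1[lbaf4] above.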
00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:55.269 08:32:30 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.269 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:55.270 08:32:30 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.270 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:55.271 08:32:30 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.271 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:55.272 08:32:30 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:55.272 08:32:30 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:55.272 08:32:30 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:55.272 08:32:30 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:55.272 08:32:30 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:55.272 08:32:30 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.272 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
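Interleaved with the nvme3 id-ctrl parse that continues below, the functions.sh@47-63 records above show the outer discovery pass: walk /sys/class/nvme, keep controllers that pci_can_use accepts, parse id-ctrl plus every namespace, and record the bookkeeping arrays. A rough reconstruction, assuming the PCI address comes from the sysfs device link (the trace assigns pci=0000:00:13.0 at functions.sh@49 without showing the derivation):

  shopt -s extglob                                     # for the @( | ) namespace glob
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")    # assumption: BDF via sysfs link
    pci_can_use "$pci" || continue                     # PCI_ALLOWED/PCI_BLOCKED filter (scripts/common.sh@18-27)
    ctrl_dev=${ctrl##*/}                               # e.g. nvme3
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    declare -gA "${ctrl_dev}_ns=()"
    unset -n _ctrl_ns; declare -n _ctrl_ns=${ctrl_dev}_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # glob copied from the trace
      [[ -e $ns ]] || continue
      nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
      _ctrl_ns[${ns##*n}]=${ns##*/}                    # e.g. _ctrl_ns[1]=nvme2n1
    done
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done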
00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 
08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.273 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:55.274 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
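(Forward note on the selection pass just below: once all four controllers are parsed, get_ctrls_with_feature keeps only those whose CTRATT register advertises Flexible Data Placement, which the NVMe spec assigns to CTRATT bit 19. A simplified paraphrase of the predicate traced at functions.sh@176-199 -- the real code reads the register through get_nvme_ctrl_feature and a nameref rather than indexing directly:

    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        local -n _ctrl=$ctrl              # nameref into the parsed register map
        ctratt=${_ctrl[ctratt]}           # 0x8000 for nvme0/1/2, 0x88010 for nvme3
        (( ctratt & 1 << 19 ))            # FDP bit: 0x88010 & 0x80000 != 0
    }
    ctrl_has_fdp nvme3 && echo nvme3      # only nvme3 is echoed (functions.sh@199)

Only nvme3's 0x88010 has bit 19 set, so it becomes the test target below.)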
00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:55.275 08:32:30 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:55.275 08:32:30 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:11:55.276 08:32:30 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:11:55.276 08:32:30 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:11:55.276 08:32:30 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:11:55.276 08:32:30 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:56.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:56.782 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:56.782 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:56.782 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:56.782 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:57.042 08:32:31 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:11:57.042 08:32:31 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:57.042 08:32:31 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:57.042 08:32:31 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:11:57.042 ************************************
00:11:57.042 START TEST nvme_flexible_data_placement
00:11:57.042 ************************************
00:11:57.042 08:32:31 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:11:57.302 Initializing NVMe Controllers
00:11:57.302 Attaching to 0000:00:13.0
00:11:57.302 Controller supports FDP
00:11:57.302 Attached to 0000:00:13.0
00:11:57.302 Namespace ID: 1
00:11:57.302 Endurance Group ID: 1
00:11:57.302 Initialization complete.
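(The START/END banners and the real/user/sys timings that bracket each test come from the run_test wrapper traced at common/autotest_common.sh@1105-1130. A simplified paraphrase -- banner widths and the argument-count check are abbreviated, not the verbatim source:

    run_test() {
        local name=$1; shift               # e.g. nvme_flexible_data_placement
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                          # the test body; prints the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

The invocation above was run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'.)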
00:11:57.302
00:11:57.302 ==================================
00:11:57.302 == FDP tests for Namespace: #01 ==
00:11:57.302 ==================================
00:11:57.302
00:11:57.302 Get Feature: FDP:
00:11:57.302 =================
00:11:57.302 Enabled: Yes
00:11:57.302 FDP configuration Index: 0
00:11:57.302
00:11:57.302 FDP configurations log page
00:11:57.302 ===========================
00:11:57.302 Number of FDP configurations: 1
00:11:57.302 Version: 0
00:11:57.302 Size: 112
00:11:57.302 FDP Configuration Descriptor: 0
00:11:57.302 Descriptor Size: 96
00:11:57.302 Reclaim Group Identifier format: 2
00:11:57.302 FDP Volatile Write Cache: Not Present
00:11:57.302 FDP Configuration: Valid
00:11:57.302 Vendor Specific Size: 0
00:11:57.302 Number of Reclaim Groups: 2
00:11:57.302 Number of Reclaim Unit Handles: 8
00:11:57.302 Max Placement Identifiers: 128
00:11:57.302 Number of Namespaces Supported: 256
00:11:57.302 Reclaim Unit Nominal Size: 6000000 bytes
00:11:57.302 Estimated Reclaim Unit Time Limit: Not Reported
00:11:57.302 RUH Desc #000: RUH Type: Initially Isolated
00:11:57.302 RUH Desc #001: RUH Type: Initially Isolated
00:11:57.302 RUH Desc #002: RUH Type: Initially Isolated
00:11:57.302 RUH Desc #003: RUH Type: Initially Isolated
00:11:57.302 RUH Desc #004: RUH Type: Initially Isolated
00:11:57.302 RUH Desc #005: RUH Type: Initially Isolated
00:11:57.302 RUH Desc #006: RUH Type: Initially Isolated
00:11:57.302 RUH Desc #007: RUH Type: Initially Isolated
00:11:57.302
00:11:57.302 FDP reclaim unit handle usage log page
00:11:57.302 ======================================
00:11:57.302 Number of Reclaim Unit Handles: 8
00:11:57.302 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:11:57.302 RUH Usage Desc #001: RUH Attributes: Unused
00:11:57.302 RUH Usage Desc #002: RUH Attributes: Unused
00:11:57.302 RUH Usage Desc #003: RUH Attributes: Unused
00:11:57.302 RUH Usage Desc #004: RUH Attributes: Unused
00:11:57.302 RUH Usage Desc #005: RUH Attributes: Unused
00:11:57.302 RUH Usage Desc #006: RUH Attributes: Unused
00:11:57.302 RUH Usage Desc #007: RUH Attributes: Unused
00:11:57.302
00:11:57.302 FDP statistics log page
00:11:57.302 =======================
00:11:57.302 Host bytes with metadata written: 1010610176
00:11:57.302 Media bytes with metadata written: 1010741248
00:11:57.302 Media bytes erased: 0
00:11:57.302
00:11:57.302 FDP Reclaim unit handle status
00:11:57.302 ==============================
00:11:57.302 Number of RUHS descriptors: 2
00:11:57.302 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005c35
00:11:57.302 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:11:57.302
00:11:57.302 FDP write on placement id: 0 success
00:11:57.302
00:11:57.302 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:11:57.302
00:11:57.302 IO mgmt send: RUH update for Placement ID: #0 Success
00:11:57.302
00:11:57.302 Get Feature: FDP Events for Placement handle: #0
00:11:57.302 ========================
00:11:57.302 Number of FDP Events: 6
00:11:57.302 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:11:57.302 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:11:57.302 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:11:57.302 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:11:57.302 FDP Event: #4 Type: Media Reallocated Enabled: No
00:11:57.302 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:11:57.302
00:11:57.302 FDP events log page
00:11:57.302 ===================
00:11:57.302 Number of FDP events: 1
00:11:57.302 FDP Event #0:
00:11:57.302 Event Type: RU Not Written to Capacity
00:11:57.302 Placement Identifier: Valid
00:11:57.303 NSID: Valid
00:11:57.303 Location: Valid
00:11:57.303 Placement Identifier: 0
00:11:57.303 Event Timestamp: 9
00:11:57.303 Namespace Identifier: 1
00:11:57.303 Reclaim Group Identifier: 0
00:11:57.303 Reclaim Unit Handle Identifier: 0
00:11:57.303
00:11:57.303 FDP test passed
00:11:57.303
00:11:57.303 real 0m0.303s
00:11:57.303 user 0m0.090s
00:11:57.303 sys 0m0.111s
00:11:57.303 08:32:32 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:57.303 08:32:32 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:11:57.303 ************************************
00:11:57.303 END TEST nvme_flexible_data_placement
00:11:57.303 ************************************
00:11:57.303
00:11:57.303 real 0m9.230s
00:11:57.303 user 0m1.660s
00:11:57.303 sys 0m2.656s
00:11:57.303 08:32:32 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:57.303 08:32:32 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:11:57.303 ************************************
00:11:57.303 END TEST nvme_fdp
00:11:57.303 ************************************
00:11:57.563 08:32:32 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:11:57.563 08:32:32 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:11:57.563 08:32:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:57.563 08:32:32 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:57.563 08:32:32 -- common/autotest_common.sh@10 -- # set +x
00:11:57.563 ************************************
00:11:57.563 START TEST nvme_rpc
00:11:57.563 ************************************
00:11:57.563 08:32:32 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:11:57.563 * Looking for test storage...
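(The lcov probe that follows runs twice in this section, once per test script; it funnels into lt/cmp_versions from scripts/common.sh (@333-368 in the trace), which split dotted versions into arrays and compare them field by field. A compact paraphrase, not the verbatim source:

    lt() { cmp_versions "$1" '<' "$2"; }     # traced call: lt 1.15 2
    cmp_versions() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"        # common.sh@336
        local op=$2 v
        IFS=.- read -ra ver2 <<< "$3"        # common.sh@337
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]                   # equal versions satisfy ==, <= and >=
    }

Here lt 1.15 2 is true (1 < 2 on the first field), so the pre-2.0 spelling of the lcov --rc options is selected and exported via LCOV_OPTS.)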
00:11:57.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:57.563 08:32:32 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:57.563 08:32:32 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:57.563 08:32:32 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:57.563 08:32:32 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:57.563 08:32:32 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.823 08:32:32 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:57.823 08:32:32 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:57.823 08:32:32 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.823 08:32:32 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:57.823 08:32:32 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.823 08:32:32 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.823 08:32:32 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.823 08:32:32 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:57.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.823 --rc genhtml_branch_coverage=1 00:11:57.823 --rc genhtml_function_coverage=1 00:11:57.823 --rc genhtml_legend=1 00:11:57.823 --rc geninfo_all_blocks=1 00:11:57.823 --rc geninfo_unexecuted_blocks=1 00:11:57.823 00:11:57.823 ' 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:57.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.823 --rc genhtml_branch_coverage=1 00:11:57.823 --rc genhtml_function_coverage=1 00:11:57.823 --rc genhtml_legend=1 00:11:57.823 --rc geninfo_all_blocks=1 00:11:57.823 --rc geninfo_unexecuted_blocks=1 00:11:57.823 00:11:57.823 ' 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:57.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.823 --rc genhtml_branch_coverage=1 00:11:57.823 --rc genhtml_function_coverage=1 00:11:57.823 --rc genhtml_legend=1 00:11:57.823 --rc geninfo_all_blocks=1 00:11:57.823 --rc geninfo_unexecuted_blocks=1 00:11:57.823 00:11:57.823 ' 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:57.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.823 --rc genhtml_branch_coverage=1 00:11:57.823 --rc genhtml_function_coverage=1 00:11:57.823 --rc genhtml_legend=1 00:11:57.823 --rc geninfo_all_blocks=1 00:11:57.823 --rc geninfo_unexecuted_blocks=1 00:11:57.823 00:11:57.823 ' 00:11:57.823 08:32:32 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:57.823 08:32:32 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:57.823 08:32:32 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:57.823 08:32:32 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67061 00:11:57.823 08:32:32 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:57.823 08:32:32 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:57.823 08:32:32 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67061 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67061 ']' 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.823 08:32:32 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 [2024-11-22 08:32:32.883686] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
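(Target selection for nvme_rpc, traced above at common/autotest_common.sh@1498-1512: gen_nvme.sh emits a JSON config entry per NVMe controller and jq extracts the PCI addresses; the first one wins. A paraphrase, assuming $rootdir points at the SPDK checkout:

    get_first_nvme_bdf() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1    # @1500: nothing enumerated -> fail
        echo "${bdfs[0]}"                    # here: 0000:00:10.0 of the four devices
    }

The spdk_tgt that is starting here (pid 67061, core mask 0x3) is then pointed at that controller with bdev_nvme_attach_controller below.)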
00:11:57.823 [2024-11-22 08:32:32.883823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67061 ] 00:11:58.082 [2024-11-22 08:32:33.066349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:58.342 [2024-11-22 08:32:33.200275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.342 [2024-11-22 08:32:33.200306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.281 08:32:34 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.281 08:32:34 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:59.281 08:32:34 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:59.541 Nvme0n1 00:11:59.541 08:32:34 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:59.541 08:32:34 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:59.541 request: 00:11:59.541 { 00:11:59.541 "bdev_name": "Nvme0n1", 00:11:59.541 "filename": "non_existing_file", 00:11:59.541 "method": "bdev_nvme_apply_firmware", 00:11:59.541 "req_id": 1 00:11:59.541 } 00:11:59.541 Got JSON-RPC error response 00:11:59.541 response: 00:11:59.541 { 00:11:59.541 "code": -32603, 00:11:59.541 "message": "open file failed." 00:11:59.541 } 00:11:59.541 08:32:34 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:59.541 08:32:34 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:59.541 08:32:34 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:59.801 08:32:34 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:59.801 08:32:34 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67061 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67061 ']' 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67061 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67061 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.801 killing process with pid 67061 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67061' 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67061 00:11:59.801 08:32:34 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67061 00:12:02.342 00:12:02.342 real 0m4.776s 00:12:02.342 user 0m8.445s 00:12:02.342 sys 0m0.963s 00:12:02.342 08:32:37 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.342 ************************************ 00:12:02.342 END TEST nvme_rpc 00:12:02.342 ************************************ 00:12:02.342 08:32:37 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.342 08:32:37 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:02.342 08:32:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:12:02.342 08:32:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.342 08:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:02.342 ************************************ 00:12:02.342 START TEST nvme_rpc_timeouts 00:12:02.342 ************************************ 00:12:02.342 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:02.342 * Looking for test storage... 00:12:02.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:02.342 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:02.342 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:12:02.342 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:02.602 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.602 08:32:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:12:02.602 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.602 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.602 --rc genhtml_branch_coverage=1 00:12:02.602 --rc genhtml_function_coverage=1 00:12:02.602 --rc genhtml_legend=1 00:12:02.602 --rc geninfo_all_blocks=1 00:12:02.602 --rc geninfo_unexecuted_blocks=1 00:12:02.602 00:12:02.602 ' 00:12:02.602 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.602 --rc genhtml_branch_coverage=1 00:12:02.602 --rc genhtml_function_coverage=1 00:12:02.602 --rc genhtml_legend=1 00:12:02.602 --rc geninfo_all_blocks=1 00:12:02.602 --rc geninfo_unexecuted_blocks=1 00:12:02.602 00:12:02.602 ' 00:12:02.602 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.602 --rc genhtml_branch_coverage=1 00:12:02.602 --rc genhtml_function_coverage=1 00:12:02.602 --rc genhtml_legend=1 00:12:02.602 --rc geninfo_all_blocks=1 00:12:02.602 --rc geninfo_unexecuted_blocks=1 00:12:02.602 00:12:02.602 ' 00:12:02.602 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.602 --rc genhtml_branch_coverage=1 00:12:02.602 --rc genhtml_function_coverage=1 00:12:02.602 --rc genhtml_legend=1 00:12:02.602 --rc geninfo_all_blocks=1 00:12:02.602 --rc geninfo_unexecuted_blocks=1 00:12:02.602 00:12:02.602 ' 00:12:02.602 08:32:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:02.602 08:32:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67143 00:12:02.602 08:32:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67143 00:12:02.603 08:32:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67179 00:12:02.603 08:32:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:02.603 08:32:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:02.603 08:32:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67179 00:12:02.603 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67179 ']' 00:12:02.603 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.603 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.603 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.603 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.603 08:32:37 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:02.603 [2024-11-22 08:32:37.613280] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:12:02.603 [2024-11-22 08:32:37.613639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67179 ] 00:12:02.862 [2024-11-22 08:32:37.797726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:02.862 [2024-11-22 08:32:37.930793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.862 [2024-11-22 08:32:37.930829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.242 08:32:38 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.242 Checking default timeout settings: 00:12:04.242 08:32:38 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:12:04.242 08:32:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:04.242 08:32:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:04.242 Making settings changes with rpc: 00:12:04.242 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:04.243 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:04.502 Check default vs. modified settings: 00:12:04.502 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:12:04.502 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67143 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67143 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:04.762 Setting action_on_timeout is changed as expected. 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67143 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67143 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:04.762 Setting timeout_us is changed as expected. 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67143 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67143 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:04.762 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:05.021 Setting timeout_admin_us is changed as expected. 00:12:05.021 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:05.021 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:05.021 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:05.021 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:05.021 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67143 /tmp/settings_modified_67143 00:12:05.021 08:32:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67179 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67179 ']' 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67179 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67179 00:12:05.021 killing process with pid 67179 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67179' 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67179 00:12:05.021 08:32:39 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67179 00:12:07.573 RPC TIMEOUT SETTING TEST PASSED. 00:12:07.574 08:32:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
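The PASSED verdict above comes from a snapshot-modify-snapshot comparison. A minimal standalone sketch of the same flow, assuming a running spdk_tgt and the rpc.py path used in this run (temp-file names here are illustrative, not the ones from this job):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default          # snapshot the default timeouts
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
         --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified         # snapshot after the change
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # save_config emits JSON, e.g. '"action_on_timeout": "none",'
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done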
00:12:07.574 00:12:07.574 real 0m5.106s 00:12:07.574 user 0m9.355s 00:12:07.574 sys 0m0.949s 00:12:07.574 ************************************ 00:12:07.574 END TEST nvme_rpc_timeouts 00:12:07.574 ************************************ 00:12:07.574 08:32:42 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.574 08:32:42 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:07.574 08:32:42 -- spdk/autotest.sh@239 -- # uname -s 00:12:07.574 08:32:42 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:12:07.574 08:32:42 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:07.574 08:32:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.574 08:32:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.574 08:32:42 -- common/autotest_common.sh@10 -- # set +x 00:12:07.574 ************************************ 00:12:07.574 START TEST sw_hotplug 00:12:07.574 ************************************ 00:12:07.574 08:32:42 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:07.574 * Looking for test storage... 00:12:07.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:07.574 08:32:42 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:07.574 08:32:42 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:12:07.574 08:32:42 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:07.833 08:32:42 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.833 08:32:42 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:12:07.833 08:32:42 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.833 08:32:42 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:07.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.833 --rc genhtml_branch_coverage=1 00:12:07.833 --rc genhtml_function_coverage=1 00:12:07.833 --rc genhtml_legend=1 00:12:07.833 --rc geninfo_all_blocks=1 00:12:07.833 --rc geninfo_unexecuted_blocks=1 00:12:07.833 00:12:07.833 ' 00:12:07.833 08:32:42 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:07.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.833 --rc genhtml_branch_coverage=1 00:12:07.833 --rc genhtml_function_coverage=1 00:12:07.833 --rc genhtml_legend=1 00:12:07.833 --rc geninfo_all_blocks=1 00:12:07.833 --rc geninfo_unexecuted_blocks=1 00:12:07.833 00:12:07.833 ' 00:12:07.833 08:32:42 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:07.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.833 --rc genhtml_branch_coverage=1 00:12:07.833 --rc genhtml_function_coverage=1 00:12:07.833 --rc genhtml_legend=1 00:12:07.833 --rc geninfo_all_blocks=1 00:12:07.833 --rc geninfo_unexecuted_blocks=1 00:12:07.833 00:12:07.833 ' 00:12:07.833 08:32:42 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:07.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.833 --rc genhtml_branch_coverage=1 00:12:07.833 --rc genhtml_function_coverage=1 00:12:07.833 --rc genhtml_legend=1 00:12:07.833 --rc geninfo_all_blocks=1 00:12:07.833 --rc geninfo_unexecuted_blocks=1 00:12:07.833 00:12:07.833 ' 00:12:07.833 08:32:42 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:08.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:08.662 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:08.662 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:08.662 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:08.662 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:08.662 08:32:43 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:08.662 08:32:43 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:08.662 08:32:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
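nvme_in_userspace, expanded in the trace that follows, selects controllers by PCI class code — class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express) — and honours the PCI_ALLOWED list. A loose one-liner equivalent of the class filter (illustrative, not the helper itself):

    # list NVMe controller BDFs: class/subclass 0108, prog-if 02
    lspci -mm -n -D | grep -i -- -p02 | tr -d '"' | awk '$2 == "0108" {print $1}'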
00:12:08.662 08:32:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@233 -- # local class 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:08.662 08:32:43 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:12:08.662 08:32:43 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:08.662 08:32:43 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:08.662 08:32:43 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:08.662 08:32:43 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:09.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:09.490 Waiting for block devices as requested 00:12:09.749 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:09.749 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.008 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.008 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:15.294 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:15.294 08:32:50 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:15.294 08:32:50 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:15.864 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:15.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:15.864 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:16.124 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:16.384 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.384 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:16.643 08:32:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68069 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:16.643 08:32:51 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:16.643 08:32:51 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:16.643 08:32:51 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:16.643 08:32:51 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:16.643 08:32:51 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:16.643 08:32:51 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:16.942 Initializing NVMe Controllers 00:12:16.942 Attaching to 0000:00:10.0 00:12:16.942 Attaching to 0000:00:11.0 00:12:16.942 Attached to 0000:00:11.0 00:12:16.942 Attached to 0000:00:10.0 00:12:16.942 Initialization complete. Starting I/O... 
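From here the hotplug example app polls I/O while remove_attach_helper (3 events, 6 s wait, use_bdev=false) yanks and restores the two allowed controllers. The helper's body is not echoed verbatim in this trace, so the outline below is a hedged reconstruction of one cycle from the sysfs writes it performs (paths assumed; the real logic lives in test/nvme/sw_hotplug.sh):

    # one software-hotplug cycle (sketch)
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise-remove the device
    done
    sleep "$hotplug_wait"                             # let the app notice and abort I/O
    echo 1 > /sys/bus/pci/rescan                      # re-enumerate; controllers reattach
    sleep "$hotplug_wait"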
00:12:16.942 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:16.942 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:16.942 00:12:17.903 QEMU NVMe Ctrl (12341 ): 1552 I/Os completed (+1552) 00:12:17.903 QEMU NVMe Ctrl (12340 ): 1552 I/Os completed (+1552) 00:12:17.903 00:12:19.281 QEMU NVMe Ctrl (12341 ): 3624 I/Os completed (+2072) 00:12:19.281 QEMU NVMe Ctrl (12340 ): 3625 I/Os completed (+2073) 00:12:19.281 00:12:19.851 QEMU NVMe Ctrl (12341 ): 5816 I/Os completed (+2192) 00:12:19.851 QEMU NVMe Ctrl (12340 ): 5817 I/Os completed (+2192) 00:12:19.851 00:12:21.231 QEMU NVMe Ctrl (12341 ): 7960 I/Os completed (+2144) 00:12:21.231 QEMU NVMe Ctrl (12340 ): 7970 I/Os completed (+2153) 00:12:21.231 00:12:22.168 QEMU NVMe Ctrl (12341 ): 10112 I/Os completed (+2152) 00:12:22.168 QEMU NVMe Ctrl (12340 ): 10129 I/Os completed (+2159) 00:12:22.168 00:12:22.737 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:22.737 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:22.737 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:22.737 [2024-11-22 08:32:57.692698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:22.737 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:22.737 [2024-11-22 08:32:57.694869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.695074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.695140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.695168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:22.737 [2024-11-22 08:32:57.697999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.698056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.698075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.698095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:22.737 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:22.737 [2024-11-22 08:32:57.733206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:22.737 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:22.737 [2024-11-22 08:32:57.734794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.734843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.734873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.734896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:22.737 [2024-11-22 08:32:57.737517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.737558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.737579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 [2024-11-22 08:32:57.737607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.737 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:22.737 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:22.997 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.997 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.997 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:22.997 00:12:22.997 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:22.997 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.997 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.997 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.997 08:32:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:22.997 Attaching to 0000:00:10.0 00:12:22.997 Attached to 0000:00:10.0 00:12:22.997 08:32:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:22.997 08:32:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.997 08:32:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:22.997 Attaching to 0000:00:11.0 00:12:22.997 Attached to 0000:00:11.0 00:12:23.934 QEMU NVMe Ctrl (12340 ): 2076 I/Os completed (+2076) 00:12:23.934 QEMU NVMe Ctrl (12341 ): 1870 I/Os completed (+1870) 00:12:23.934 00:12:24.871 QEMU NVMe Ctrl (12340 ): 4236 I/Os completed (+2160) 00:12:24.871 QEMU NVMe Ctrl (12341 ): 4030 I/Os completed (+2160) 00:12:24.871 00:12:26.248 QEMU NVMe Ctrl (12340 ): 6345 I/Os completed (+2109) 00:12:26.248 QEMU NVMe Ctrl (12341 ): 6141 I/Os completed (+2111) 00:12:26.248 00:12:27.186 QEMU NVMe Ctrl (12340 ): 8489 I/Os completed (+2144) 00:12:27.186 QEMU NVMe Ctrl (12341 ): 8285 I/Os completed (+2144) 00:12:27.186 00:12:28.123 QEMU NVMe Ctrl (12340 ): 10617 I/Os completed (+2128) 00:12:28.123 QEMU NVMe Ctrl (12341 ): 10417 I/Os completed (+2132) 00:12:28.123 00:12:29.060 QEMU NVMe Ctrl (12340 ): 12789 I/Os completed (+2172) 00:12:29.060 QEMU NVMe Ctrl (12341 ): 12590 I/Os completed (+2173) 00:12:29.060 00:12:29.997 QEMU NVMe Ctrl (12340 ): 14901 I/Os completed (+2112) 00:12:29.997 QEMU NVMe Ctrl (12341 ): 14706 I/Os completed (+2116) 00:12:29.997 00:12:30.934 QEMU NVMe Ctrl (12340 ): 17085 I/Os completed (+2184) 00:12:30.934 QEMU NVMe Ctrl (12341 ): 16892 I/Os completed (+2186) 00:12:30.934 
00:12:31.872 QEMU NVMe Ctrl (12340 ): 19225 I/Os completed (+2140) 00:12:31.872 QEMU NVMe Ctrl (12341 ): 19035 I/Os completed (+2143) 00:12:31.872 00:12:32.835 QEMU NVMe Ctrl (12340 ): 21385 I/Os completed (+2160) 00:12:32.835 QEMU NVMe Ctrl (12341 ): 21209 I/Os completed (+2174) 00:12:32.835 00:12:34.213 QEMU NVMe Ctrl (12340 ): 23533 I/Os completed (+2148) 00:12:34.213 QEMU NVMe Ctrl (12341 ): 23361 I/Os completed (+2152) 00:12:34.213 00:12:35.151 QEMU NVMe Ctrl (12340 ): 25693 I/Os completed (+2160) 00:12:35.151 QEMU NVMe Ctrl (12341 ): 25527 I/Os completed (+2166) 00:12:35.151 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:35.151 [2024-11-22 08:33:10.047242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:35.151 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:35.151 [2024-11-22 08:33:10.049044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.049104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.049129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.049152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:35.151 [2024-11-22 08:33:10.051976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.052029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.052049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.052069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:35.151 [2024-11-22 08:33:10.086593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:35.151 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:35.151 [2024-11-22 08:33:10.088164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.088208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.088234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.088257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:35.151 [2024-11-22 08:33:10.090838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.090882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.090903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 [2024-11-22 08:33:10.090923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:35.151 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:35.411 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:35.411 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:35.411 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:35.411 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:35.411 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:35.411 Attaching to 0000:00:10.0 00:12:35.411 Attached to 0000:00:10.0 00:12:35.411 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:35.411 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:35.411 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:35.411 Attaching to 0000:00:11.0 00:12:35.411 Attached to 0000:00:11.0 00:12:35.981 QEMU NVMe Ctrl (12340 ): 1292 I/Os completed (+1292) 00:12:35.981 QEMU NVMe Ctrl (12341 ): 1072 I/Os completed (+1072) 00:12:35.981 00:12:36.920 QEMU NVMe Ctrl (12340 ): 3492 I/Os completed (+2200) 00:12:36.920 QEMU NVMe Ctrl (12341 ): 3274 I/Os completed (+2202) 00:12:36.920 00:12:37.857 QEMU NVMe Ctrl (12340 ): 5744 I/Os completed (+2252) 00:12:37.857 QEMU NVMe Ctrl (12341 ): 5526 I/Os completed (+2252) 00:12:37.857 00:12:39.235 QEMU NVMe Ctrl (12340 ): 7996 I/Os completed (+2252) 00:12:39.235 QEMU NVMe Ctrl (12341 ): 7778 I/Os completed (+2252) 00:12:39.235 00:12:40.171 QEMU NVMe Ctrl (12340 ): 10220 I/Os completed (+2224) 00:12:40.171 QEMU NVMe Ctrl (12341 ): 10002 I/Os completed (+2224) 00:12:40.171 00:12:41.109 QEMU NVMe Ctrl (12340 ): 12420 I/Os completed (+2200) 00:12:41.109 QEMU NVMe Ctrl (12341 ): 12202 I/Os completed (+2200) 00:12:41.109 00:12:42.048 QEMU NVMe Ctrl (12340 ): 14648 I/Os completed (+2228) 00:12:42.048 QEMU NVMe Ctrl (12341 ): 14430 I/Os completed (+2228) 00:12:42.048 00:12:42.990 QEMU NVMe Ctrl (12340 ): 16852 I/Os completed (+2204) 00:12:42.990 QEMU NVMe Ctrl (12341 ): 16634 I/Os completed (+2204) 00:12:42.990 00:12:44.039 
QEMU NVMe Ctrl (12340 ): 19048 I/Os completed (+2196) 00:12:44.039 QEMU NVMe Ctrl (12341 ): 18830 I/Os completed (+2196) 00:12:44.039 00:12:44.987 QEMU NVMe Ctrl (12340 ): 21248 I/Os completed (+2200) 00:12:44.987 QEMU NVMe Ctrl (12341 ): 21030 I/Os completed (+2200) 00:12:44.987 00:12:45.926 QEMU NVMe Ctrl (12340 ): 23448 I/Os completed (+2200) 00:12:45.926 QEMU NVMe Ctrl (12341 ): 23230 I/Os completed (+2200) 00:12:45.926 00:12:46.864 QEMU NVMe Ctrl (12340 ): 25656 I/Os completed (+2208) 00:12:46.864 QEMU NVMe Ctrl (12341 ): 25438 I/Os completed (+2208) 00:12:46.864 00:12:47.433 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:47.433 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:47.433 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:47.433 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:47.433 [2024-11-22 08:33:22.406725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:47.433 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:47.433 [2024-11-22 08:33:22.408443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.408493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.408514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.408536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:47.433 [2024-11-22 08:33:22.411430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.411477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.411495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.411516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:12:47.433 EAL: Scan for (pci) bus failed. 00:12:47.433 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:47.433 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:47.433 [2024-11-22 08:33:22.444164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:47.433 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:47.433 [2024-11-22 08:33:22.445700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.445747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.445769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.445791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:47.433 [2024-11-22 08:33:22.448272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.448314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.448337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 [2024-11-22 08:33:22.448354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.433 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:47.433 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:47.433 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:47.433 EAL: Scan for (pci) bus failed. 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:47.693 Attaching to 0000:00:10.0 00:12:47.693 Attached to 0000:00:10.0 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:47.693 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:47.693 Attaching to 0000:00:11.0 00:12:47.693 Attached to 0000:00:11.0 00:12:47.693 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:47.693 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:47.693 [2024-11-22 08:33:22.762375] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:59.913 08:33:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:59.913 08:33:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:59.913 08:33:34 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.07 00:12:59.913 08:33:34 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.07 00:12:59.913 08:33:34 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:59.913 08:33:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.07 00:12:59.913 08:33:34 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.07 2 00:12:59.913 remove_attach_helper took 43.07s to complete (handling 2 nvme drive(s)) 08:33:34 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68069 00:13:06.489 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68069) - No such process 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68069 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68609 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:06.489 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68609 00:13:06.489 08:33:40 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68609 ']' 00:13:06.489 08:33:40 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.489 08:33:40 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.489 08:33:40 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.489 08:33:40 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.489 08:33:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.489 [2024-11-22 08:33:40.874360] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:13:06.489 [2024-11-22 08:33:40.874505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68609 ] 00:13:06.489 [2024-11-22 08:33:41.055817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.489 [2024-11-22 08:33:41.167504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.058 08:33:41 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.058 08:33:41 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:13:07.058 08:33:41 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:07.058 08:33:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.058 08:33:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.058 08:33:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.058 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:07.058 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:07.058 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:07.058 08:33:42 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:07.058 08:33:42 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:07.058 08:33:42 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:07.058 08:33:42 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:07.058 08:33:42 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:07.058 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:07.059 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:07.059 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:07.059 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:07.059 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:13.632 08:33:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.632 08:33:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.632 [2024-11-22 08:33:48.092832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
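This pass repeats the hotplug cycle with use_bdev=true: instead of watching the example app, the test enables the NVMe bdev module's hotplug monitor and then polls the target's bdev list until the removed PCI addresses drop out. The manual equivalent of the rpc_cmd traced above, against the default /var/tmp/spdk.sock:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e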
00:13:13.632 [2024-11-22 08:33:48.095113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.632 [2024-11-22 08:33:48.095160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.632 [2024-11-22 08:33:48.095180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.632 [2024-11-22 08:33:48.095215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.632 [2024-11-22 08:33:48.095227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.632 [2024-11-22 08:33:48.095242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.632 [2024-11-22 08:33:48.095255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.632 [2024-11-22 08:33:48.095271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.632 [2024-11-22 08:33:48.095282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.632 [2024-11-22 08:33:48.095300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.632 [2024-11-22 08:33:48.095311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.632 [2024-11-22 08:33:48.095325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.632 08:33:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:13.632 [2024-11-22 08:33:48.492175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:13.632 [2024-11-22 08:33:48.494469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.632 [2024-11-22 08:33:48.494511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.632 [2024-11-22 08:33:48.494546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.632 [2024-11-22 08:33:48.494564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.632 [2024-11-22 08:33:48.494578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.632 [2024-11-22 08:33:48.494590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.632 [2024-11-22 08:33:48.494605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.632 [2024-11-22 08:33:48.494615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.632 [2024-11-22 08:33:48.494629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.632 [2024-11-22 08:33:48.494641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.632 [2024-11-22 08:33:48.494655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.632 [2024-11-22 08:33:48.494666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:13.632 08:33:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.632 08:33:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.632 08:33:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:13.632 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:13.891 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:13.891 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:13.891 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:13.891 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:13.891 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:13.891 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:13.891 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:13.891 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:14.151 08:33:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:14.151 08:33:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:14.151 08:33:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:26.364 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:26.364 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:26.364 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:26.364 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.364 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.364 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.364 08:34:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.365 08:34:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.365 08:34:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.365 08:34:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.365 08:34:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.365 [2024-11-22 08:34:01.171844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
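The bdev_bdfs helper expanded above is what drives that wait loop: it asks the target for its bdevs and pulls out each NVMe bdev's PCI address (the trace feeds rpc output through /dev/fd/63; a plain pipe is equivalent):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u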
00:13:26.365 [2024-11-22 08:34:01.174165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.365 [2024-11-22 08:34:01.174223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.365 [2024-11-22 08:34:01.174240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.365 [2024-11-22 08:34:01.174295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.365 [2024-11-22 08:34:01.174310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.365 [2024-11-22 08:34:01.174325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.365 [2024-11-22 08:34:01.174338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.365 [2024-11-22 08:34:01.174351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.365 [2024-11-22 08:34:01.174363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.365 [2024-11-22 08:34:01.174377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.365 [2024-11-22 08:34:01.174388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.365 [2024-11-22 08:34:01.174403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.365 08:34:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:26.365 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:26.626 [2024-11-22 08:34:01.571152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:26.626 [2024-11-22 08:34:01.573433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.626 [2024-11-22 08:34:01.573469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.626 [2024-11-22 08:34:01.573490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.626 [2024-11-22 08:34:01.573524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.626 [2024-11-22 08:34:01.573537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.626 [2024-11-22 08:34:01.573549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.626 [2024-11-22 08:34:01.573564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.626 [2024-11-22 08:34:01.573575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.626 [2024-11-22 08:34:01.573589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.626 [2024-11-22 08:34:01.573602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.626 [2024-11-22 08:34:01.573615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.626 [2024-11-22 08:34:01.573626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.626 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:26.626 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:26.626 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:26.626 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.626 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.626 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.626 08:34:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.626 08:34:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.885 08:34:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.886 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:26.886 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:26.886 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:26.886 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:26.886 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:26.886 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:27.146 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:27.146 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:27.146 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:27.146 08:34:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:27.146 08:34:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:27.146 08:34:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:27.146 08:34:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:39.368 08:34:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.368 08:34:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:39.368 08:34:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:39.368 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:39.368 [2024-11-22 08:34:14.150995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:39.368 [2024-11-22 08:34:14.153511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.368 [2024-11-22 08:34:14.153557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.368 [2024-11-22 08:34:14.153574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.368 [2024-11-22 08:34:14.153598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.368 [2024-11-22 08:34:14.153610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.368 [2024-11-22 08:34:14.153628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.369 [2024-11-22 08:34:14.153641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.369 [2024-11-22 08:34:14.153655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.369 [2024-11-22 08:34:14.153667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.369 [2024-11-22 08:34:14.153682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.369 [2024-11-22 08:34:14.153693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.369 [2024-11-22 08:34:14.153707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:39.369 08:34:14 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:39.369 08:34:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.369 08:34:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:39.369 08:34:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:39.369 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:39.630 [2024-11-22 08:34:14.550323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:39.630 [2024-11-22 08:34:14.552650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.630 [2024-11-22 08:34:14.552687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.630 [2024-11-22 08:34:14.552721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.630 [2024-11-22 08:34:14.552742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.630 [2024-11-22 08:34:14.552755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.630 [2024-11-22 08:34:14.552767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.630 [2024-11-22 08:34:14.552782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.630 [2024-11-22 08:34:14.552792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.630 [2024-11-22 08:34:14.552809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.630 [2024-11-22 08:34:14.552821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.630 [2024-11-22 08:34:14.552834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.630 [2024-11-22 08:34:14.552845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:39.890 08:34:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.890 08:34:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:39.890 08:34:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:39.890 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:40.150 08:34:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:40.150 08:34:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:40.150 08:34:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:40.150 08:34:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:40.150 08:34:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:40.150 08:34:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:40.150 08:34:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:40.150 08:34:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.15 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.15 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:13:52.374 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.374 08:34:27 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:52.374 08:34:27 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:52.374 08:34:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:58.944 [2024-11-22 08:34:33.276277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
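The `for dev in "${nvmes[@]}"` / `echo 1` pair traced at sw_hotplug.sh@39-40 is the removal half of each hotplug event, and it is what drives the controllers into the failed state logged above. The trace never prints the redirection target, so the sysfs path in this sketch is an assumption (a conventional PCI hot-remove):

  # Assumed reconstruction of the removal step (sw_hotplug.sh@39-40);
  # only 'echo 1' appears in the trace, the remove node is an assumption
  for dev in "${nvmes[@]}"; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"
  done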
00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:58.944 [2024-11-22 08:34:33.277876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.944 [2024-11-22 08:34:33.277919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.944 [2024-11-22 08:34:33.277946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.944 [2024-11-22 08:34:33.277984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.944 [2024-11-22 08:34:33.277997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.944 [2024-11-22 08:34:33.278013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.944 [2024-11-22 08:34:33.278026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.944 [2024-11-22 08:34:33.278040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:58.944 [2024-11-22 08:34:33.278052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.944 [2024-11-22 08:34:33.278068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.944 [2024-11-22 08:34:33.278079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.944 [2024-11-22 08:34:33.278097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:58.944 08:34:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.944 08:34:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.944 08:34:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:58.944 [2024-11-22 08:34:33.675633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:58.944 [2024-11-22 08:34:33.677703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.944 [2024-11-22 08:34:33.677742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.944 [2024-11-22 08:34:33.677761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.944 [2024-11-22 08:34:33.677781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.944 [2024-11-22 08:34:33.677794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.944 [2024-11-22 08:34:33.677806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.944 [2024-11-22 08:34:33.677821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.944 [2024-11-22 08:34:33.677831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.944 [2024-11-22 08:34:33.677845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.944 [2024-11-22 08:34:33.677858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.944 [2024-11-22 08:34:33.677870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.944 [2024-11-22 08:34:33.677882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:58.944 08:34:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.944 08:34:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.944 08:34:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:58.944 08:34:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:58.944 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:58.944 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:58.944 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:59.203 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:59.203 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.203 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.203 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.203 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:59.203 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:59.203 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.203 08:34:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:11.452 08:34:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.452 08:34:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.452 08:34:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:11.452 [2024-11-22 08:34:46.355262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:11.452 [2024-11-22 08:34:46.356983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.452 [2024-11-22 08:34:46.357031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.452 [2024-11-22 08:34:46.357048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.452 [2024-11-22 08:34:46.357074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.452 [2024-11-22 08:34:46.357086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.452 [2024-11-22 08:34:46.357101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.452 [2024-11-22 08:34:46.357114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.452 [2024-11-22 08:34:46.357127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.452 [2024-11-22 08:34:46.357139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.452 [2024-11-22 08:34:46.357154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.452 [2024-11-22 08:34:46.357166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.452 [2024-11-22 08:34:46.357180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:11.452 08:34:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.452 08:34:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.452 08:34:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:11.452 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:12.020 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:12.020 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:12.020 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:12.020 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:12.020 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:12.020 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:12.020 08:34:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.020 08:34:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:12.020 08:34:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.020 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:12.020 08:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:12.020 [2024-11-22 08:34:47.054134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
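Each pass finishes with the @66-@71 check traced earlier: sleep through the 12 s settle window, re-query the bdev list, and assert that exactly the expected controllers came back. A sketch mirroring the traced comparison (`nvmes` holds the expected BDFs, e.g. 0000:00:10.0 0000:00:11.0):

  # Reconstruction of the post-attach verification (sw_hotplug.sh@66-71)
  sleep 12
  bdfs=($(bdev_bdfs))                  # bdev_bdfs as sketched above
  [[ ${bdfs[*]} == "${nvmes[*]}" ]]    # trace: [[ 0000:00:10.0 0000:00:11.0 == ... ]]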
00:14:12.020 [2024-11-22 08:34:47.056352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.020 [2024-11-22 08:34:47.056389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.020 [2024-11-22 08:34:47.056423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.020 [2024-11-22 08:34:47.056444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.020 [2024-11-22 08:34:47.056460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.020 [2024-11-22 08:34:47.056473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.020 [2024-11-22 08:34:47.056489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.020 [2024-11-22 08:34:47.056500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.020 [2024-11-22 08:34:47.056514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.020 [2024-11-22 08:34:47.056527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.020 [2024-11-22 08:34:47.056540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.020 [2024-11-22 08:34:47.056552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:12.587 08:34:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.587 08:34:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:12.587 08:34:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:12.587 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:12.847 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:12.847 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:12.847 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:12.847 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:12.847 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:12.847 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:12.847 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:12.847 08:34:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:25.062 08:34:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.062 08:34:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 08:34:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:25.062 [2024-11-22 08:34:59.933451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:25.062 [2024-11-22 08:34:59.936335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.062 [2024-11-22 08:34:59.936375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.062 [2024-11-22 08:34:59.936392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.062 [2024-11-22 08:34:59.936416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.062 [2024-11-22 08:34:59.936427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.062 [2024-11-22 08:34:59.936443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.062 [2024-11-22 08:34:59.936456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:25.062 [2024-11-22 08:34:59.936473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.062 [2024-11-22 08:34:59.936485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.062 [2024-11-22 08:34:59.936500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.062 [2024-11-22 08:34:59.936511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.062 [2024-11-22 08:34:59.936525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.062 08:34:59 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:25.062 08:34:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.062 08:34:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 08:34:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:25.062 08:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:25.321 [2024-11-22 08:35:00.332806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:25.321 [2024-11-22 08:35:00.334456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.321 [2024-11-22 08:35:00.334500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.321 [2024-11-22 08:35:00.334519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.321 [2024-11-22 08:35:00.334540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.321 [2024-11-22 08:35:00.334554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.321 [2024-11-22 08:35:00.334566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.321 [2024-11-22 08:35:00.334583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.321 [2024-11-22 08:35:00.334594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.321 [2024-11-22 08:35:00.334608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.321 [2024-11-22 08:35:00.334620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.321 [2024-11-22 08:35:00.334648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.321 [2024-11-22 08:35:00.334660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.580 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:25.580 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:25.580 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:25.580 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:25.580 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:25.580 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
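The run is timed the same way on every pass: timing_cmd wraps remove_attach_helper in bash's time builtin with TIMEFORMAT=%2R so only the elapsed seconds are captured (45.15 s for the first pass above, 45.74 s in the summary just below). A sketch of that pattern from the traced autotest_common.sh lines — the fd bookkeeping is a reconstruction, and the traced [[ -t 0 ]] tty check is elided:

  # Reconstruction of timing_cmd (autotest_common.sh@709-722)
  timing_cmd() {
      local cmd_es=0 time=0 TIMEFORMAT=%2R
      exec {stdout}>&1                          # keep the command's own stdout visible
      time=$({ time "$@" >&"$stdout"; } 2>&1) || cmd_es=$?
      echo "$time"                              # only %2R seconds, e.g. 45.74
      return "$cmd_es"
  }

  helper_time=$(timing_cmd remove_attach_helper 3 6 true)
  printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
      "$helper_time" 2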
00:14:25.580 08:35:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.580 08:35:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.580 08:35:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.580 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:25.580 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.839 08:35:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.74 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.74 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.74 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.74 2 00:14:38.053 remove_attach_helper took 45.74s to complete (handling 2 nvme drive(s)) 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:38.053 08:35:12 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68609 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68609 ']' 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68609 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68609 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.053 08:35:12 sw_hotplug -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.053 killing process with pid 68609 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68609' 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68609 00:14:38.053 08:35:12 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68609 00:14:40.632 08:35:15 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:40.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:41.463 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:41.463 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:41.463 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.463 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.722 00:14:41.722 real 2m34.157s 00:14:41.722 user 1m51.932s 00:14:41.722 sys 0m22.405s 00:14:41.722 08:35:16 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.722 08:35:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:41.722 ************************************ 00:14:41.722 END TEST sw_hotplug 00:14:41.722 ************************************ 00:14:41.722 08:35:16 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:41.722 08:35:16 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:41.722 08:35:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:41.722 08:35:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.722 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:14:41.722 ************************************ 00:14:41.722 START TEST nvme_xnvme 00:14:41.722 ************************************ 00:14:41.722 08:35:16 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:41.983 * Looking for test storage... 
00:14:41.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:41.983 08:35:16 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:41.983 08:35:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:41.983 08:35:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:41.983 08:35:16 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.983 08:35:16 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:41.983 08:35:16 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.983 08:35:16 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:41.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.983 --rc genhtml_branch_coverage=1 00:14:41.983 --rc genhtml_function_coverage=1 00:14:41.983 --rc genhtml_legend=1 00:14:41.983 --rc geninfo_all_blocks=1 00:14:41.983 --rc geninfo_unexecuted_blocks=1 00:14:41.983 00:14:41.983 ' 00:14:41.983 08:35:16 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:41.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.983 --rc genhtml_branch_coverage=1 00:14:41.983 --rc genhtml_function_coverage=1 00:14:41.983 --rc genhtml_legend=1 00:14:41.983 --rc geninfo_all_blocks=1 00:14:41.983 --rc geninfo_unexecuted_blocks=1 00:14:41.983 00:14:41.983 ' 00:14:41.983 08:35:16 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:41.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.983 --rc genhtml_branch_coverage=1 00:14:41.983 --rc genhtml_function_coverage=1 00:14:41.983 --rc genhtml_legend=1 00:14:41.983 --rc geninfo_all_blocks=1 00:14:41.983 --rc geninfo_unexecuted_blocks=1 00:14:41.983 00:14:41.983 ' 00:14:41.983 08:35:16 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:41.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.983 --rc genhtml_branch_coverage=1 00:14:41.983 --rc genhtml_function_coverage=1 00:14:41.983 --rc genhtml_legend=1 00:14:41.983 --rc geninfo_all_blocks=1 00:14:41.983 --rc geninfo_unexecuted_blocks=1 00:14:41.983 00:14:41.983 ' 00:14:41.983 08:35:16 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:14:41.984 08:35:16 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:41.984 08:35:16 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:41.984 08:35:16 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:41.984 08:35:16 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:41.984 #define SPDK_CONFIG_H 00:14:41.984 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:41.984 #define SPDK_CONFIG_APPS 1 00:14:41.984 #define SPDK_CONFIG_ARCH native 00:14:41.984 #define SPDK_CONFIG_ASAN 1 00:14:41.984 #undef SPDK_CONFIG_AVAHI 00:14:41.984 #undef SPDK_CONFIG_CET 00:14:41.984 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:41.984 #define SPDK_CONFIG_COVERAGE 1 00:14:41.984 #define SPDK_CONFIG_CROSS_PREFIX 00:14:41.984 #undef SPDK_CONFIG_CRYPTO 00:14:41.984 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:41.984 #undef SPDK_CONFIG_CUSTOMOCF 00:14:41.984 #undef SPDK_CONFIG_DAOS 00:14:41.984 #define SPDK_CONFIG_DAOS_DIR 00:14:41.984 #define SPDK_CONFIG_DEBUG 1 00:14:41.984 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:41.984 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:41.984 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:41.984 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:41.984 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:41.984 #undef SPDK_CONFIG_DPDK_UADK 00:14:41.984 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:41.984 #define SPDK_CONFIG_EXAMPLES 1 00:14:41.984 #undef SPDK_CONFIG_FC 00:14:41.984 #define SPDK_CONFIG_FC_PATH 00:14:41.984 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:41.985 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:41.985 #define SPDK_CONFIG_FSDEV 1 00:14:41.985 #undef SPDK_CONFIG_FUSE 00:14:41.985 #undef SPDK_CONFIG_FUZZER 00:14:41.985 #define SPDK_CONFIG_FUZZER_LIB 00:14:41.985 #undef SPDK_CONFIG_GOLANG 00:14:41.985 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:41.985 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:41.985 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:41.985 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:41.985 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:41.985 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:41.985 #undef SPDK_CONFIG_HAVE_LZ4 00:14:41.985 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:41.985 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:41.985 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:41.985 #define SPDK_CONFIG_IDXD 1 00:14:41.985 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:41.985 #undef SPDK_CONFIG_IPSEC_MB 00:14:41.985 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:41.985 #define SPDK_CONFIG_ISAL 1 00:14:41.985 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:41.985 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:41.985 #define SPDK_CONFIG_LIBDIR 00:14:41.985 #undef SPDK_CONFIG_LTO 00:14:41.985 #define SPDK_CONFIG_MAX_LCORES 128 00:14:41.985 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:41.985 #define SPDK_CONFIG_NVME_CUSE 1 00:14:41.985 #undef SPDK_CONFIG_OCF 00:14:41.985 #define SPDK_CONFIG_OCF_PATH 00:14:41.985 #define SPDK_CONFIG_OPENSSL_PATH 00:14:41.985 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:41.985 
#define SPDK_CONFIG_PGO_DIR 00:14:41.985 #undef SPDK_CONFIG_PGO_USE 00:14:41.985 #define SPDK_CONFIG_PREFIX /usr/local 00:14:41.985 #undef SPDK_CONFIG_RAID5F 00:14:41.985 #undef SPDK_CONFIG_RBD 00:14:41.985 #define SPDK_CONFIG_RDMA 1 00:14:41.985 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:41.985 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:41.985 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:41.985 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:41.985 #define SPDK_CONFIG_SHARED 1 00:14:41.985 #undef SPDK_CONFIG_SMA 00:14:41.985 #define SPDK_CONFIG_TESTS 1 00:14:41.985 #undef SPDK_CONFIG_TSAN 00:14:41.985 #define SPDK_CONFIG_UBLK 1 00:14:41.985 #define SPDK_CONFIG_UBSAN 1 00:14:41.985 #undef SPDK_CONFIG_UNIT_TESTS 00:14:41.985 #undef SPDK_CONFIG_URING 00:14:41.985 #define SPDK_CONFIG_URING_PATH 00:14:41.985 #undef SPDK_CONFIG_URING_ZNS 00:14:41.985 #undef SPDK_CONFIG_USDT 00:14:41.985 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:41.985 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:41.985 #undef SPDK_CONFIG_VFIO_USER 00:14:41.985 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:41.985 #define SPDK_CONFIG_VHOST 1 00:14:41.985 #define SPDK_CONFIG_VIRTIO 1 00:14:41.985 #undef SPDK_CONFIG_VTUNE 00:14:41.985 #define SPDK_CONFIG_VTUNE_DIR 00:14:41.985 #define SPDK_CONFIG_WERROR 1 00:14:41.985 #define SPDK_CONFIG_WPDK_DIR 00:14:41.985 #define SPDK_CONFIG_XNVME 1 00:14:41.985 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:41.985 08:35:16 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:41.985 08:35:16 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.985 08:35:16 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.985 08:35:16 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.985 08:35:16 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.985 08:35:16 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.985 08:35:16 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.985 08:35:16 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.985 08:35:16 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.985 08:35:16 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:41.985 08:35:16 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.985 08:35:16 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@68 -- # uname -s 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:41.985 08:35:16 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:41.985 08:35:17 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:14:41.985 08:35:17 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:41.985 08:35:17 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:41.986 
08:35:17 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:41.986 
08:35:17 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:41.986 08:35:17 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
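(Annotation, not part of the captured trace: the exports above configure the sanitizer runtimes for every binary the test launches. To reproduce that environment by hand, with values copied from this log:)

    # ASAN/UBSAN options exactly as exported in the trace.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # The "echo leak:libfuse3.so" step builds the LeakSanitizer suppression file.
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file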
00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69971 ]] 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69971 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.3GyHz3 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:41.987 08:35:17 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.3GyHz3/tests/xnvme /tmp/spdk.3GyHz3 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977026560 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591056384 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:42.247 
08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977026560 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591056384 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:14:42.247 08:35:17 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:14:42.247 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97183236096 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2519543808 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:42.248 * Looking for test storage... 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13977026560 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:42.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:42.248 08:35:17 nvme_xnvme -- 
common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:42.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.248 --rc genhtml_branch_coverage=1 00:14:42.248 --rc genhtml_function_coverage=1 00:14:42.248 --rc genhtml_legend=1 00:14:42.248 --rc geninfo_all_blocks=1 00:14:42.248 --rc geninfo_unexecuted_blocks=1 00:14:42.248 00:14:42.248 ' 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:42.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.248 --rc genhtml_branch_coverage=1 00:14:42.248 --rc genhtml_function_coverage=1 00:14:42.248 --rc genhtml_legend=1 00:14:42.248 --rc geninfo_all_blocks=1 00:14:42.248 --rc geninfo_unexecuted_blocks=1 00:14:42.248 00:14:42.248 ' 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:42.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.248 --rc genhtml_branch_coverage=1 00:14:42.248 --rc genhtml_function_coverage=1 00:14:42.248 --rc genhtml_legend=1 00:14:42.248 --rc geninfo_all_blocks=1 00:14:42.248 --rc geninfo_unexecuted_blocks=1 00:14:42.248 00:14:42.248 ' 00:14:42.248 08:35:17 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:42.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.248 --rc genhtml_branch_coverage=1 00:14:42.248 --rc genhtml_function_coverage=1 00:14:42.248 --rc genhtml_legend=1 00:14:42.248 --rc geninfo_all_blocks=1 00:14:42.248 --rc geninfo_unexecuted_blocks=1 00:14:42.248 00:14:42.248 ' 00:14:42.248 08:35:17 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.248 08:35:17 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.248 08:35:17 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.248 08:35:17 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.248 08:35:17 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.248 08:35:17 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:42.248 08:35:17 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:14:42.248 08:35:17 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:14:42.249 
08:35:17 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:14:42.249 08:35:17 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:14:42.249 08:35:17 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:14:42.249 08:35:17 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:14:42.249 08:35:17 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:14:42.249 08:35:17 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:14:42.249 08:35:17 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:42.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:43.076 Waiting for block devices as requested 00:14:43.076 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.335 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.335 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.594 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:48.866 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:48.866 08:35:23 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:14:49.124 08:35:23 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:14:49.124 08:35:23 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:14:49.382 08:35:24 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:49.382 08:35:24 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:49.382 No valid GPT data, bailing 00:14:49.382 08:35:24 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:49.382 08:35:24 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:14:49.382 08:35:24 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:49.382 08:35:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:49.382 08:35:24 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:49.382 08:35:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.382 08:35:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.382 ************************************ 00:14:49.382 START TEST xnvme_rpc 00:14:49.382 ************************************ 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70366 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70366 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70366 ']' 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.382 08:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.382 [2024-11-22 08:35:24.423159] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
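(Annotation, not part of the captured trace: the xnvme_rpc test that follows creates an xnvme bdev over JSON-RPC, reads its parameters back, and deletes it. The same exchange issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket; the positional arguments are assumed to mirror the rpc_cmd call in the trace:)

    # Create the bdev, verify one of its parameters, then tear it down.
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    scripts/rpc.py framework_get_config bdev | \
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> libaio
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev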
00:14:49.382 [2024-11-22 08:35:24.423270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70366 ] 00:14:49.651 [2024-11-22 08:35:24.604287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.651 [2024-11-22 08:35:24.710521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.591 xnvme_bdev 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.591 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70366 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70366 ']' 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70366 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70366 00:14:50.850 killing process with pid 70366 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70366' 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70366 00:14:50.850 08:35:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70366 00:14:53.384 00:14:53.384 real 0m3.735s 00:14:53.384 user 0m3.784s 00:14:53.384 sys 0m0.532s 00:14:53.384 08:35:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.384 ************************************ 00:14:53.384 END TEST xnvme_rpc 00:14:53.384 ************************************ 00:14:53.384 08:35:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.384 08:35:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:53.384 08:35:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.384 08:35:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.384 08:35:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.384 ************************************ 00:14:53.384 START TEST xnvme_bdevperf 00:14:53.384 ************************************ 00:14:53.384 08:35:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:53.384 08:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:53.384 08:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:53.384 08:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:53.384 08:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:53.384 08:35:28 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:53.384 08:35:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:53.384 08:35:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:53.384 { 00:14:53.384 "subsystems": [ 00:14:53.384 { 00:14:53.384 "subsystem": "bdev", 00:14:53.384 "config": [ 00:14:53.384 { 00:14:53.384 "params": { 00:14:53.384 "io_mechanism": "libaio", 00:14:53.384 "conserve_cpu": false, 00:14:53.384 "filename": "/dev/nvme0n1", 00:14:53.384 "name": "xnvme_bdev" 00:14:53.384 }, 00:14:53.384 "method": "bdev_xnvme_create" 00:14:53.384 }, 00:14:53.384 { 00:14:53.384 "method": "bdev_wait_for_examine" 00:14:53.384 } 00:14:53.384 ] 00:14:53.384 } 00:14:53.384 ] 00:14:53.384 } 00:14:53.384 [2024-11-22 08:35:28.210930] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:14:53.384 [2024-11-22 08:35:28.211229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70445 ] 00:14:53.384 [2024-11-22 08:35:28.391618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.643 [2024-11-22 08:35:28.493284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.901 Running I/O for 5 seconds... 00:14:55.792 44348.00 IOPS, 173.23 MiB/s [2024-11-22T08:35:32.258Z] 44697.00 IOPS, 174.60 MiB/s [2024-11-22T08:35:33.194Z] 44722.33 IOPS, 174.70 MiB/s [2024-11-22T08:35:34.129Z] 44319.50 IOPS, 173.12 MiB/s [2024-11-22T08:35:34.129Z] 44332.40 IOPS, 173.17 MiB/s 00:14:59.042 Latency(us) 00:14:59.042 [2024-11-22T08:35:34.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.042 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:59.042 xnvme_bdev : 5.00 44313.44 173.10 0.00 0.00 1441.09 450.72 5079.70 00:14:59.042 [2024-11-22T08:35:34.129Z] =================================================================================================================== 00:14:59.042 [2024-11-22T08:35:34.129Z] Total : 44313.44 173.10 0.00 0.00 1441.09 450.72 5079.70 00:14:59.980 08:35:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.980 08:35:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:59.980 08:35:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:59.980 08:35:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:59.980 08:35:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:59.980 { 00:14:59.980 "subsystems": [ 00:14:59.980 { 00:14:59.980 "subsystem": "bdev", 00:14:59.980 "config": [ 00:14:59.980 { 00:14:59.980 "params": { 00:14:59.980 "io_mechanism": "libaio", 00:14:59.980 "conserve_cpu": false, 00:14:59.980 "filename": "/dev/nvme0n1", 00:14:59.980 "name": "xnvme_bdev" 00:14:59.980 }, 00:14:59.980 "method": "bdev_xnvme_create" 00:14:59.980 }, 00:14:59.980 { 00:14:59.980 "method": "bdev_wait_for_examine" 00:14:59.980 } 00:14:59.980 ] 00:14:59.980 } 00:14:59.980 ] 00:14:59.980 } 00:14:59.980 [2024-11-22 08:35:35.001121] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
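(Annotation, not part of the captured trace: gen_conf above feeds the JSON subsystem config to bdevperf on /dev/fd/62. To rerun one of these jobs standalone, save the JSON block printed in the log to a file — here a hypothetical xnvme.json — and pass the same flags as the trace:)

    # 64-deep 4 KiB random-read job against the xnvme bdev for 5 seconds.
    build/examples/bdevperf --json xnvme.json \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096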
00:14:59.980 [2024-11-22 08:35:35.001370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70526 ] 00:15:00.239 [2024-11-22 08:35:35.182937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.239 [2024-11-22 08:35:35.291930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.807 Running I/O for 5 seconds... 00:15:02.682 44077.00 IOPS, 172.18 MiB/s [2024-11-22T08:35:38.706Z] 44395.00 IOPS, 173.42 MiB/s [2024-11-22T08:35:39.642Z] 44216.67 IOPS, 172.72 MiB/s [2024-11-22T08:35:41.022Z] 44997.25 IOPS, 175.77 MiB/s 00:15:05.935 Latency(us) 00:15:05.935 [2024-11-22T08:35:41.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.935 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:05.935 xnvme_bdev : 5.00 45675.04 178.42 0.00 0.00 1397.78 131.60 3039.92 00:15:05.935 [2024-11-22T08:35:41.022Z] =================================================================================================================== 00:15:05.935 [2024-11-22T08:35:41.022Z] Total : 45675.04 178.42 0.00 0.00 1397.78 131.60 3039.92 00:15:06.873 00:15:06.873 real 0m13.583s 00:15:06.873 user 0m4.791s 00:15:06.873 sys 0m5.970s 00:15:06.873 08:35:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.873 ************************************ 00:15:06.873 END TEST xnvme_bdevperf 00:15:06.873 ************************************ 00:15:06.873 08:35:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:06.873 08:35:41 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:06.873 08:35:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:06.873 08:35:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.873 08:35:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.873 ************************************ 00:15:06.873 START TEST xnvme_fio_plugin 00:15:06.873 ************************************ 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
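The MiB/s columns in these summaries follow directly from IOPS at the fixed 4 KiB I/O size (-o 4096); for the randwrite pass above, for instance:

    # 45675.04 IOPS x 4096 B per I/O, converted to MiB/s
    $ echo '45675.04 * 4096 / 1048576' | bc -l
    178.41812500000000000000

which matches the 178.42 MiB/s reported in the table.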
00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:06.873 { 00:15:06.873 "subsystems": [ 00:15:06.873 { 00:15:06.873 "subsystem": "bdev", 00:15:06.873 "config": [ 00:15:06.873 { 00:15:06.873 "params": { 00:15:06.873 "io_mechanism": "libaio", 00:15:06.873 "conserve_cpu": false, 00:15:06.873 "filename": "/dev/nvme0n1", 00:15:06.873 "name": "xnvme_bdev" 00:15:06.873 }, 00:15:06.873 "method": "bdev_xnvme_create" 00:15:06.873 }, 00:15:06.873 { 00:15:06.873 "method": "bdev_wait_for_examine" 00:15:06.873 } 00:15:06.873 ] 00:15:06.873 } 00:15:06.873 ] 00:15:06.873 } 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:06.873 08:35:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.133 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:07.133 fio-3.35 00:15:07.133 Starting 1 thread 00:15:13.705 00:15:13.705 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70645: Fri Nov 22 08:35:47 2024 00:15:13.705 read: IOPS=54.6k, BW=213MiB/s (224MB/s)(1067MiB/5001msec) 00:15:13.705 slat (usec): min=4, max=814, avg=15.96, stdev=20.76 00:15:13.705 clat (usec): min=76, max=12832, avg=709.36, stdev=450.10 00:15:13.705 lat (usec): min=123, max=12837, avg=725.32, stdev=453.17 00:15:13.705 clat percentiles (usec): 00:15:13.705 | 1.00th=[ 157], 5.00th=[ 233], 10.00th=[ 297], 20.00th=[ 400], 00:15:13.705 | 30.00th=[ 482], 40.00th=[ 562], 50.00th=[ 644], 60.00th=[ 725], 00:15:13.705 | 70.00th=[ 816], 80.00th=[ 922], 90.00th=[ 1090], 95.00th=[ 1319], 00:15:13.705 | 99.00th=[ 2671], 99.50th=[ 3294], 99.90th=[ 4293], 99.95th=[ 4621], 00:15:13.705 | 99.99th=[ 5276] 00:15:13.705 bw ( KiB/s): min=202976, max=262000, per=100.00%, avg=219339.22, stdev=17043.51, 
samples=9 00:15:13.705 iops : min=50744, max=65500, avg=54834.67, stdev=4260.93, samples=9 00:15:13.705 lat (usec) : 100=0.06%, 250=6.21%, 500=26.01%, 750=31.16%, 1000=21.81% 00:15:13.705 lat (msec) : 2=12.69%, 4=1.88%, 10=0.18%, 20=0.01% 00:15:13.705 cpu : usr=26.20%, sys=53.52%, ctx=66, majf=0, minf=764 00:15:13.705 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=9.9%, 16=25.7%, 32=58.2%, >=64=1.9% 00:15:13.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.705 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:13.705 issued rwts: total=273046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.705 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:13.705 00:15:13.705 Run status group 0 (all jobs): 00:15:13.705 READ: bw=213MiB/s (224MB/s), 213MiB/s-213MiB/s (224MB/s-224MB/s), io=1067MiB (1118MB), run=5001-5001msec 00:15:14.283 ----------------------------------------------------- 00:15:14.283 Suppressions used: 00:15:14.283 count bytes template 00:15:14.283 1 11 /usr/src/fio/parse.c 00:15:14.283 1 8 libtcmalloc_minimal.so 00:15:14.283 1 904 libcrypto.so 00:15:14.283 ----------------------------------------------------- 00:15:14.283 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:14.283 { 00:15:14.283 "subsystems": [ 00:15:14.283 { 00:15:14.283 "subsystem": "bdev", 00:15:14.283 "config": [ 00:15:14.283 { 00:15:14.283 "params": { 00:15:14.283 "io_mechanism": "libaio", 00:15:14.283 "conserve_cpu": false, 00:15:14.283 "filename": "/dev/nvme0n1", 00:15:14.283 "name": "xnvme_bdev" 00:15:14.283 }, 00:15:14.283 "method": "bdev_xnvme_create" 00:15:14.283 }, 00:15:14.283 { 00:15:14.283 "method": "bdev_wait_for_examine" 00:15:14.283 } 00:15:14.283 ] 00:15:14.283 } 00:15:14.283 ] 00:15:14.283 } 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:14.283 08:35:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.283 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:14.283 fio-3.35 00:15:14.283 Starting 1 thread 00:15:20.863 00:15:20.863 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70742: Fri Nov 22 08:35:55 2024 00:15:20.863 write: IOPS=71.4k, BW=279MiB/s (292MB/s)(1394MiB/5001msec); 0 zone resets 00:15:20.863 slat (usec): min=4, max=969, avg=11.50, stdev=23.63 00:15:20.863 clat (usec): min=82, max=5032, avg=599.04, stdev=241.76 00:15:20.863 lat (usec): min=132, max=5103, avg=610.53, stdev=239.40 00:15:20.863 clat percentiles (usec): 00:15:20.863 | 1.00th=[ 178], 5.00th=[ 273], 10.00th=[ 347], 20.00th=[ 433], 00:15:20.863 | 30.00th=[ 490], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 627], 00:15:20.863 | 70.00th=[ 685], 80.00th=[ 742], 90.00th=[ 840], 95.00th=[ 947], 00:15:20.863 | 99.00th=[ 1336], 99.50th=[ 1614], 99.90th=[ 2868], 99.95th=[ 3425], 00:15:20.863 | 99.99th=[ 4228] 00:15:20.863 bw ( KiB/s): min=233104, max=306392, per=100.00%, avg=289213.33, stdev=27073.42, samples=9 00:15:20.863 iops : min=58276, max=76598, avg=72303.33, stdev=6768.35, samples=9 00:15:20.863 lat (usec) : 100=0.04%, 250=3.82%, 500=28.55%, 750=48.97%, 1000=15.15% 00:15:20.863 lat (msec) : 2=3.17%, 4=0.29%, 10=0.02% 00:15:20.863 cpu : usr=36.66%, sys=49.92%, ctx=14, majf=0, minf=764 00:15:20.864 IO depths : 1=0.2%, 2=0.7%, 4=2.3%, 8=7.3%, 16=22.4%, 32=64.8%, >=64=2.3% 00:15:20.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.864 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.1%, 32=0.4%, 64=1.7%, >=64=0.0% 00:15:20.864 issued rwts: total=0,356912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:20.864 00:15:20.864 Run status group 0 (all jobs): 00:15:20.864 WRITE: bw=279MiB/s (292MB/s), 279MiB/s-279MiB/s (292MB/s-292MB/s), io=1394MiB (1462MB), run=5001-5001msec 00:15:21.430 ----------------------------------------------------- 00:15:21.430 Suppressions used: 00:15:21.430 count bytes template 00:15:21.430 1 11 /usr/src/fio/parse.c 00:15:21.430 1 8 libtcmalloc_minimal.so 00:15:21.430 1 904 libcrypto.so 00:15:21.430 ----------------------------------------------------- 00:15:21.430 00:15:21.430 00:15:21.430 real 0m14.597s 00:15:21.430 user 0m6.662s 00:15:21.430 sys 0m5.911s 00:15:21.430 08:35:56 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.430 08:35:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:21.430 ************************************ 00:15:21.430 END TEST xnvme_fio_plugin 00:15:21.430 ************************************ 00:15:21.430 08:35:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:21.430 08:35:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:21.430 08:35:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:21.430 08:35:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:21.430 08:35:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:21.430 08:35:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.430 08:35:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.430 ************************************ 00:15:21.431 START TEST xnvme_rpc 00:15:21.431 ************************************ 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70830 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70830 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70830 ']' 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.431 08:35:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.690 [2024-11-22 08:35:56.555937] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
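The xnvme_rpc test that starts here drives a create/inspect/delete cycle over JSON-RPC against a bare spdk_tgt. Outside the harness the equivalent calls look roughly like this, assuming rpc_cmd is backed by scripts/rpc.py as usual (the jq filter is the one rpc_xnvme uses in the trace):

    # create the bdev with the libaio backend and conserve_cpu enabled (-c)
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # read a single param back out of the generated bdev config
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # tear the bdev down again
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev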
00:15:21.690 [2024-11-22 08:35:56.556083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70830 ] 00:15:21.690 [2024-11-22 08:35:56.734015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.949 [2024-11-22 08:35:56.839526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.886 xnvme_bdev 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70830 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70830 ']' 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70830 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70830 00:15:22.886 killing process with pid 70830 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70830' 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70830 00:15:22.886 08:35:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70830 00:15:25.421 00:15:25.421 real 0m3.727s 00:15:25.421 user 0m3.771s 00:15:25.421 sys 0m0.533s 00:15:25.421 08:36:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.421 ************************************ 00:15:25.421 END TEST xnvme_rpc 00:15:25.421 ************************************ 00:15:25.421 08:36:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.421 08:36:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:25.421 08:36:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:25.421 08:36:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.421 08:36:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.421 ************************************ 00:15:25.421 START TEST xnvme_bdevperf 00:15:25.421 ************************************ 00:15:25.421 08:36:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:25.421 08:36:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:25.421 08:36:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:25.421 08:36:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:25.421 08:36:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:25.421 08:36:00 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:25.421 08:36:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:25.421 08:36:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:25.421 { 00:15:25.421 "subsystems": [ 00:15:25.421 { 00:15:25.421 "subsystem": "bdev", 00:15:25.421 "config": [ 00:15:25.421 { 00:15:25.421 "params": { 00:15:25.421 "io_mechanism": "libaio", 00:15:25.422 "conserve_cpu": true, 00:15:25.422 "filename": "/dev/nvme0n1", 00:15:25.422 "name": "xnvme_bdev" 00:15:25.422 }, 00:15:25.422 "method": "bdev_xnvme_create" 00:15:25.422 }, 00:15:25.422 { 00:15:25.422 "method": "bdev_wait_for_examine" 00:15:25.422 } 00:15:25.422 ] 00:15:25.422 } 00:15:25.422 ] 00:15:25.422 } 00:15:25.422 [2024-11-22 08:36:00.343687] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:15:25.422 [2024-11-22 08:36:00.343799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70909 ] 00:15:25.682 [2024-11-22 08:36:00.522932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.682 [2024-11-22 08:36:00.627070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.942 Running I/O for 5 seconds... 00:15:28.255 42549.00 IOPS, 166.21 MiB/s [2024-11-22T08:36:04.278Z] 42579.50 IOPS, 166.33 MiB/s [2024-11-22T08:36:05.215Z] 43146.00 IOPS, 168.54 MiB/s [2024-11-22T08:36:06.152Z] 43429.75 IOPS, 169.65 MiB/s [2024-11-22T08:36:06.152Z] 44174.00 IOPS, 172.55 MiB/s 00:15:31.065 Latency(us) 00:15:31.065 [2024-11-22T08:36:06.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.065 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:31.065 xnvme_bdev : 5.00 44136.77 172.41 0.00 0.00 1446.20 166.14 4948.10 00:15:31.065 [2024-11-22T08:36:06.153Z] =================================================================================================================== 00:15:31.066 [2024-11-22T08:36:06.153Z] Total : 44136.77 172.41 0.00 0.00 1446.20 166.14 4948.10 00:15:32.001 08:36:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:32.001 08:36:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:32.001 08:36:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:32.001 08:36:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:32.001 08:36:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:32.259 { 00:15:32.259 "subsystems": [ 00:15:32.259 { 00:15:32.259 "subsystem": "bdev", 00:15:32.259 "config": [ 00:15:32.259 { 00:15:32.259 "params": { 00:15:32.259 "io_mechanism": "libaio", 00:15:32.259 "conserve_cpu": true, 00:15:32.259 "filename": "/dev/nvme0n1", 00:15:32.259 "name": "xnvme_bdev" 00:15:32.259 }, 00:15:32.259 "method": "bdev_xnvme_create" 00:15:32.259 }, 00:15:32.259 { 00:15:32.259 "method": "bdev_wait_for_examine" 00:15:32.259 } 00:15:32.259 ] 00:15:32.259 } 00:15:32.259 ] 00:15:32.259 } 00:15:32.259 [2024-11-22 08:36:07.138147] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
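Everything from here on repeats the earlier libaio runs with conserve_cpu flipped to true (compare the 44136.77 IOPS above with 44313.44 IOPS in the conserve_cpu=false pass). The harness sweeps the flag in an outer loop; schematically, with gen_xnvme_conf standing in as a hypothetical helper that emits the JSON shown in these traces:

    # hypothetical sketch of the conserve_cpu sweep performed by xnvme.sh
    for cc in false true; do
        gen_xnvme_conf libaio "$cc" /dev/nvme0n1 > /tmp/xnvme.json  # hypothetical helper
        build/examples/bdevperf --json /tmp/xnvme.json \
            -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
    done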
00:15:32.260 [2024-11-22 08:36:07.138476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70990 ] 00:15:32.260 [2024-11-22 08:36:07.319456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.518 [2024-11-22 08:36:07.421678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.777 Running I/O for 5 seconds... 00:15:35.090 49630.00 IOPS, 193.87 MiB/s [2024-11-22T08:36:11.115Z] 49658.50 IOPS, 193.98 MiB/s [2024-11-22T08:36:12.053Z] 48894.67 IOPS, 190.99 MiB/s [2024-11-22T08:36:12.990Z] 48029.75 IOPS, 187.62 MiB/s 00:15:37.903 Latency(us) 00:15:37.903 [2024-11-22T08:36:12.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.903 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:37.903 xnvme_bdev : 5.00 47694.48 186.31 0.00 0.00 1338.30 148.87 5158.66 00:15:37.903 [2024-11-22T08:36:12.990Z] =================================================================================================================== 00:15:37.903 [2024-11-22T08:36:12.990Z] Total : 47694.48 186.31 0.00 0.00 1338.30 148.87 5158.66 00:15:38.883 00:15:38.883 real 0m13.616s 00:15:38.883 user 0m4.844s 00:15:38.883 sys 0m5.954s 00:15:38.883 ************************************ 00:15:38.883 END TEST xnvme_bdevperf 00:15:38.883 ************************************ 00:15:38.883 08:36:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.883 08:36:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:38.883 08:36:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:38.883 08:36:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:38.883 08:36:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.883 08:36:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.143 ************************************ 00:15:39.143 START TEST xnvme_fio_plugin 00:15:39.143 ************************************ 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:39.143 08:36:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:39.143 { 00:15:39.143 "subsystems": [ 00:15:39.143 { 00:15:39.143 "subsystem": "bdev", 00:15:39.143 "config": [ 00:15:39.143 { 00:15:39.143 "params": { 00:15:39.143 "io_mechanism": "libaio", 00:15:39.143 "conserve_cpu": true, 00:15:39.143 "filename": "/dev/nvme0n1", 00:15:39.143 "name": "xnvme_bdev" 00:15:39.144 }, 00:15:39.144 "method": "bdev_xnvme_create" 00:15:39.144 }, 00:15:39.144 { 00:15:39.144 "method": "bdev_wait_for_examine" 00:15:39.144 } 00:15:39.144 ] 00:15:39.144 } 00:15:39.144 ] 00:15:39.144 } 00:15:39.144 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:39.144 fio-3.35 00:15:39.144 Starting 1 thread 00:15:45.716 00:15:45.716 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71115: Fri Nov 22 08:36:19 2024 00:15:45.716 read: IOPS=55.4k, BW=216MiB/s (227MB/s)(1082MiB/5001msec) 00:15:45.716 slat (usec): min=4, max=514, avg=15.56, stdev=22.04 00:15:45.716 clat (usec): min=51, max=5433, avg=713.15, stdev=444.25 00:15:45.716 lat (usec): min=94, max=5447, avg=728.71, stdev=447.45 00:15:45.716 clat percentiles (usec): 00:15:45.716 | 1.00th=[ 165], 5.00th=[ 249], 10.00th=[ 318], 20.00th=[ 420], 00:15:45.716 | 30.00th=[ 494], 40.00th=[ 570], 50.00th=[ 635], 60.00th=[ 709], 00:15:45.716 | 70.00th=[ 799], 80.00th=[ 914], 90.00th=[ 1090], 95.00th=[ 1336], 00:15:45.716 | 99.00th=[ 2704], 99.50th=[ 3326], 99.90th=[ 4293], 99.95th=[ 4490], 00:15:45.716 | 99.99th=[ 4883] 00:15:45.716 bw ( KiB/s): min=192480, max=245344, per=100.00%, avg=222217.44, stdev=14786.20, samples=9 
00:15:45.716 iops : min=48120, max=61336, avg=55554.33, stdev=3696.55, samples=9 00:15:45.716 lat (usec) : 100=0.04%, 250=4.98%, 500=25.75%, 750=33.82%, 1000=21.10% 00:15:45.716 lat (msec) : 2=12.16%, 4=1.96%, 10=0.19% 00:15:45.716 cpu : usr=27.72%, sys=53.00%, ctx=93, majf=0, minf=764 00:15:45.717 IO depths : 1=0.1%, 2=0.9%, 4=3.1%, 8=9.0%, 16=24.8%, 32=60.1%, >=64=2.0% 00:15:45.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.717 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:15:45.717 issued rwts: total=276945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.717 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:45.717 00:15:45.717 Run status group 0 (all jobs): 00:15:45.717 READ: bw=216MiB/s (227MB/s), 216MiB/s-216MiB/s (227MB/s-227MB/s), io=1082MiB (1134MB), run=5001-5001msec 00:15:46.287 ----------------------------------------------------- 00:15:46.287 Suppressions used: 00:15:46.287 count bytes template 00:15:46.287 1 11 /usr/src/fio/parse.c 00:15:46.287 1 8 libtcmalloc_minimal.so 00:15:46.287 1 904 libcrypto.so 00:15:46.287 ----------------------------------------------------- 00:15:46.287 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:46.287 { 00:15:46.287 "subsystems": [ 00:15:46.287 { 00:15:46.287 "subsystem": "bdev", 00:15:46.287 "config": [ 00:15:46.287 { 00:15:46.287 "params": { 00:15:46.287 "io_mechanism": "libaio", 
00:15:46.287 "conserve_cpu": true, 00:15:46.287 "filename": "/dev/nvme0n1", 00:15:46.287 "name": "xnvme_bdev" 00:15:46.287 }, 00:15:46.287 "method": "bdev_xnvme_create" 00:15:46.287 }, 00:15:46.287 { 00:15:46.287 "method": "bdev_wait_for_examine" 00:15:46.287 } 00:15:46.287 ] 00:15:46.287 } 00:15:46.287 ] 00:15:46.287 } 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:46.287 08:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:46.547 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:46.547 fio-3.35 00:15:46.547 Starting 1 thread 00:15:53.123 00:15:53.123 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71209: Fri Nov 22 08:36:27 2024 00:15:53.123 write: IOPS=54.8k, BW=214MiB/s (224MB/s)(1070MiB/5001msec); 0 zone resets 00:15:53.123 slat (usec): min=4, max=3893, avg=15.55, stdev=25.75 00:15:53.123 clat (usec): min=57, max=5284, avg=735.36, stdev=448.77 00:15:53.123 lat (usec): min=82, max=6165, avg=750.91, stdev=451.94 00:15:53.123 clat percentiles (usec): 00:15:53.123 | 1.00th=[ 180], 5.00th=[ 269], 10.00th=[ 343], 20.00th=[ 445], 00:15:53.123 | 30.00th=[ 523], 40.00th=[ 594], 50.00th=[ 660], 60.00th=[ 734], 00:15:53.123 | 70.00th=[ 816], 80.00th=[ 930], 90.00th=[ 1106], 95.00th=[ 1336], 00:15:53.123 | 99.00th=[ 2802], 99.50th=[ 3425], 99.90th=[ 4424], 99.95th=[ 4686], 00:15:53.123 | 99.99th=[ 5014] 00:15:53.123 bw ( KiB/s): min=191168, max=271992, per=100.00%, avg=221571.67, stdev=28173.50, samples=9 00:15:53.123 iops : min=47792, max=67998, avg=55392.89, stdev=7043.37, samples=9 00:15:53.123 lat (usec) : 100=0.04%, 250=3.99%, 500=22.71%, 750=35.15%, 1000=23.12% 00:15:53.123 lat (msec) : 2=12.81%, 4=1.95%, 10=0.24% 00:15:53.123 cpu : usr=29.16%, sys=53.04%, ctx=96, majf=0, minf=764 00:15:53.123 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=9.0%, 16=24.5%, 32=60.4%, >=64=2.0% 00:15:53.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.123 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:15:53.123 issued rwts: total=0,273987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:53.123 00:15:53.123 Run status group 0 (all jobs): 00:15:53.123 WRITE: bw=214MiB/s (224MB/s), 214MiB/s-214MiB/s (224MB/s-224MB/s), io=1070MiB (1122MB), run=5001-5001msec 00:15:53.691 ----------------------------------------------------- 00:15:53.691 Suppressions used: 00:15:53.691 count bytes template 00:15:53.691 1 11 /usr/src/fio/parse.c 00:15:53.691 1 8 libtcmalloc_minimal.so 00:15:53.691 1 904 libcrypto.so 00:15:53.691 ----------------------------------------------------- 00:15:53.691 00:15:53.691 00:15:53.691 real 0m14.751s 00:15:53.691 user 0m6.536s 00:15:53.691 sys 0m6.033s 00:15:53.691 08:36:28 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.691 ************************************ 00:15:53.691 END TEST xnvme_fio_plugin 00:15:53.691 ************************************ 00:15:53.691 08:36:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:53.691 08:36:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:53.691 08:36:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:53.691 08:36:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.691 08:36:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.691 ************************************ 00:15:53.691 START TEST xnvme_rpc 00:15:53.691 ************************************ 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71295 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71295 00:15:53.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71295 ']' 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.691 08:36:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.950 [2024-11-22 08:36:28.872264] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
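At this point the outer io-mechanism loop advances from libaio to io_uring and the whole rpc/bdevperf/fio sequence repeats. The only change in the create call is the io_mechanism positional; with conserve_cpu off the harness passes an empty flag (the '' visible in the trace below), which outside the harness is simply:

    # io_uring backend, conserve_cpu left at its default (false)
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring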
00:15:53.950 [2024-11-22 08:36:28.872631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71295 ] 00:15:54.210 [2024-11-22 08:36:29.054358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.210 [2024-11-22 08:36:29.166414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 xnvme_bdev 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:55.148 08:36:30 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71295 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71295 ']' 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71295 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71295 00:15:55.409 killing process with pid 71295 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71295' 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71295 00:15:55.409 08:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71295 00:15:57.975 00:15:57.975 real 0m3.799s 00:15:57.975 user 0m3.876s 00:15:57.975 sys 0m0.539s 00:15:57.975 08:36:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.975 ************************************ 00:15:57.975 END TEST xnvme_rpc 00:15:57.975 ************************************ 00:15:57.975 08:36:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 08:36:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:57.975 08:36:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:57.975 08:36:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.975 08:36:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 ************************************ 00:15:57.975 START TEST xnvme_bdevperf 00:15:57.975 ************************************ 00:15:57.975 08:36:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:57.975 08:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:57.975 08:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:57.975 08:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:57.975 08:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:57.975 08:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:57.975 08:36:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:57.975 08:36:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 { 00:15:57.975 "subsystems": [ 00:15:57.975 { 00:15:57.975 "subsystem": "bdev", 00:15:57.975 "config": [ 00:15:57.975 { 00:15:57.975 "params": { 00:15:57.975 "io_mechanism": "io_uring", 00:15:57.975 "conserve_cpu": false, 00:15:57.975 "filename": "/dev/nvme0n1", 00:15:57.975 "name": "xnvme_bdev" 00:15:57.975 }, 00:15:57.975 "method": "bdev_xnvme_create" 00:15:57.975 }, 00:15:57.975 { 00:15:57.975 "method": "bdev_wait_for_examine" 00:15:57.975 } 00:15:57.975 ] 00:15:57.975 } 00:15:57.975 ] 00:15:57.975 } 00:15:57.975 [2024-11-22 08:36:32.728151] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:15:57.975 [2024-11-22 08:36:32.728275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71380 ] 00:15:57.975 [2024-11-22 08:36:32.907687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.975 [2024-11-22 08:36:33.016442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.544 Running I/O for 5 seconds... 00:16:00.421 54144.00 IOPS, 211.50 MiB/s [2024-11-22T08:36:36.445Z] 53984.00 IOPS, 210.88 MiB/s [2024-11-22T08:36:37.382Z] 46208.00 IOPS, 180.50 MiB/s [2024-11-22T08:36:38.761Z] 43008.00 IOPS, 168.00 MiB/s 00:16:03.674 Latency(us) 00:16:03.674 [2024-11-22T08:36:38.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.674 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:03.674 xnvme_bdev : 5.00 41082.93 160.48 0.00 0.00 1553.65 789.59 7948.54 00:16:03.674 [2024-11-22T08:36:38.761Z] =================================================================================================================== 00:16:03.674 [2024-11-22T08:36:38.761Z] Total : 41082.93 160.48 0.00 0.00 1553.65 789.59 7948.54 00:16:04.613 08:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:04.613 08:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:04.613 08:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:04.613 08:36:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:04.613 08:36:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:04.613 { 00:16:04.613 "subsystems": [ 00:16:04.613 { 00:16:04.613 "subsystem": "bdev", 00:16:04.613 "config": [ 00:16:04.613 { 00:16:04.613 "params": { 00:16:04.613 "io_mechanism": "io_uring", 00:16:04.613 "conserve_cpu": false, 00:16:04.613 "filename": "/dev/nvme0n1", 00:16:04.613 "name": "xnvme_bdev" 00:16:04.613 }, 00:16:04.613 "method": "bdev_xnvme_create" 00:16:04.613 }, 00:16:04.613 { 00:16:04.613 "method": "bdev_wait_for_examine" 00:16:04.613 } 00:16:04.613 ] 00:16:04.613 } 00:16:04.613 ] 00:16:04.613 } 00:16:04.613 [2024-11-22 08:36:39.475015] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
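The randread numbers above are self-consistent with the fixed queue depth via Little's law: outstanding I/O = IOPS x mean latency, and 41082.93 IOPS x 1553.65 us comes out to roughly 63.8, i.e. the -q 64 queue stays essentially full for the whole run:

    # Little's law check on the io_uring randread pass
    $ echo '41082.93 * 1553.65 / 1000000' | bc -l
    63.82849419450000000000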
00:16:04.613 [2024-11-22 08:36:39.475287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71455 ] 00:16:04.613 [2024-11-22 08:36:39.654691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.873 [2024-11-22 08:36:39.753348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.132 Running I/O for 5 seconds... 00:16:07.012 24064.00 IOPS, 94.00 MiB/s [2024-11-22T08:36:43.478Z] 23616.00 IOPS, 92.25 MiB/s [2024-11-22T08:36:44.417Z] 24405.33 IOPS, 95.33 MiB/s [2024-11-22T08:36:45.353Z] 25312.00 IOPS, 98.88 MiB/s [2024-11-22T08:36:45.353Z] 25446.40 IOPS, 99.40 MiB/s 00:16:10.266 Latency(us) 00:16:10.266 [2024-11-22T08:36:45.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.266 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:10.266 xnvme_bdev : 5.01 25405.59 99.24 0.00 0.00 2511.24 1460.74 8106.46 00:16:10.266 [2024-11-22T08:36:45.353Z] =================================================================================================================== 00:16:10.266 [2024-11-22T08:36:45.353Z] Total : 25405.59 99.24 0.00 0.00 2511.24 1460.74 8106.46 00:16:11.205 ************************************ 00:16:11.205 END TEST xnvme_bdevperf 00:16:11.205 ************************************ 00:16:11.205 00:16:11.205 real 0m13.512s 00:16:11.205 user 0m6.370s 00:16:11.205 sys 0m6.912s 00:16:11.205 08:36:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.205 08:36:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:11.205 08:36:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:11.205 08:36:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:11.205 08:36:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.205 08:36:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:11.205 ************************************ 00:16:11.205 START TEST xnvme_fio_plugin 00:16:11.205 ************************************ 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:11.205 08:36:46 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:11.205 08:36:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.205 { 00:16:11.205 "subsystems": [ 00:16:11.205 { 00:16:11.205 "subsystem": "bdev", 00:16:11.205 "config": [ 00:16:11.205 { 00:16:11.205 "params": { 00:16:11.205 "io_mechanism": "io_uring", 00:16:11.205 "conserve_cpu": false, 00:16:11.205 "filename": "/dev/nvme0n1", 00:16:11.205 "name": "xnvme_bdev" 00:16:11.205 }, 00:16:11.205 "method": "bdev_xnvme_create" 00:16:11.205 }, 00:16:11.205 { 00:16:11.205 "method": "bdev_wait_for_examine" 00:16:11.205 } 00:16:11.205 ] 00:16:11.205 } 00:16:11.205 ] 00:16:11.205 } 00:16:11.464 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:11.464 fio-3.35 00:16:11.464 Starting 1 thread 00:16:18.072 00:16:18.072 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71580: Fri Nov 22 08:36:52 2024 00:16:18.072 read: IOPS=28.4k, BW=111MiB/s (116MB/s)(556MiB/5001msec) 00:16:18.072 slat (usec): min=2, max=103, avg= 6.07, stdev= 2.45 00:16:18.072 clat (usec): min=934, max=3571, avg=2010.39, stdev=298.53 00:16:18.072 lat (usec): min=942, max=3583, avg=2016.45, stdev=299.74 00:16:18.072 clat percentiles (usec): 00:16:18.072 | 1.00th=[ 1450], 5.00th=[ 1663], 10.00th=[ 1713], 20.00th=[ 1778], 00:16:18.072 | 30.00th=[ 1827], 40.00th=[ 1893], 50.00th=[ 1942], 60.00th=[ 2008], 00:16:18.072 | 70.00th=[ 2089], 80.00th=[ 2245], 90.00th=[ 2474], 95.00th=[ 2638], 00:16:18.072 | 99.00th=[ 2835], 99.50th=[ 2900], 99.90th=[ 3163], 99.95th=[ 3294], 00:16:18.072 | 99.99th=[ 3425] 00:16:18.072 bw ( KiB/s): min=90443, 
max=126464, per=100.00%, avg=113815.60, stdev=12189.46, samples=10 00:16:18.072 iops : min=22610, max=31616, avg=28453.70, stdev=3047.47, samples=10 00:16:18.072 lat (usec) : 1000=0.01% 00:16:18.072 lat (msec) : 2=59.95%, 4=40.05% 00:16:18.072 cpu : usr=31.64%, sys=67.22%, ctx=12, majf=0, minf=762 00:16:18.072 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:18.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.072 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:18.072 issued rwts: total=142229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.072 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:18.072 00:16:18.072 Run status group 0 (all jobs): 00:16:18.072 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=556MiB (583MB), run=5001-5001msec 00:16:18.642 ----------------------------------------------------- 00:16:18.642 Suppressions used: 00:16:18.642 count bytes template 00:16:18.642 1 11 /usr/src/fio/parse.c 00:16:18.642 1 8 libtcmalloc_minimal.so 00:16:18.642 1 904 libcrypto.so 00:16:18.642 ----------------------------------------------------- 00:16:18.642 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:18.642 { 00:16:18.642 "subsystems": [ 00:16:18.642 { 00:16:18.642 "subsystem": "bdev", 00:16:18.642 "config": [ 00:16:18.642 { 00:16:18.642 "params": { 00:16:18.642 "io_mechanism": 
"io_uring", 00:16:18.642 "conserve_cpu": false, 00:16:18.642 "filename": "/dev/nvme0n1", 00:16:18.642 "name": "xnvme_bdev" 00:16:18.642 }, 00:16:18.642 "method": "bdev_xnvme_create" 00:16:18.642 }, 00:16:18.642 { 00:16:18.642 "method": "bdev_wait_for_examine" 00:16:18.642 } 00:16:18.642 ] 00:16:18.642 } 00:16:18.642 ] 00:16:18.642 } 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:18.642 08:36:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:18.902 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:18.902 fio-3.35 00:16:18.902 Starting 1 thread 00:16:25.475 00:16:25.475 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71673: Fri Nov 22 08:36:59 2024 00:16:25.475 write: IOPS=28.2k, BW=110MiB/s (116MB/s)(552MiB/5001msec); 0 zone resets 00:16:25.475 slat (usec): min=2, max=178, avg= 6.17, stdev= 2.56 00:16:25.475 clat (usec): min=688, max=4329, avg=2020.85, stdev=294.58 00:16:25.475 lat (usec): min=691, max=4341, avg=2027.03, stdev=295.74 00:16:25.475 clat percentiles (usec): 00:16:25.475 | 1.00th=[ 1434], 5.00th=[ 1663], 10.00th=[ 1729], 20.00th=[ 1795], 00:16:25.475 | 30.00th=[ 1860], 40.00th=[ 1909], 50.00th=[ 1975], 60.00th=[ 2024], 00:16:25.475 | 70.00th=[ 2114], 80.00th=[ 2245], 90.00th=[ 2442], 95.00th=[ 2606], 00:16:25.475 | 99.00th=[ 2802], 99.50th=[ 2900], 99.90th=[ 3163], 99.95th=[ 3523], 00:16:25.475 | 99.99th=[ 4228] 00:16:25.475 bw ( KiB/s): min=99840, max=124928, per=100.00%, avg=114972.44, stdev=9127.08, samples=9 00:16:25.475 iops : min=24960, max=31232, avg=28743.11, stdev=2281.77, samples=9 00:16:25.475 lat (usec) : 750=0.01%, 1000=0.19% 00:16:25.475 lat (msec) : 2=55.65%, 4=44.13%, 10=0.03% 00:16:25.475 cpu : usr=32.44%, sys=66.20%, ctx=16, majf=0, minf=762 00:16:25.475 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:16:25.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.475 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:25.475 issued rwts: total=0,141248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.475 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.475 00:16:25.475 Run status group 0 (all jobs): 00:16:25.475 WRITE: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=552MiB (579MB), run=5001-5001msec 00:16:25.734 ----------------------------------------------------- 00:16:25.734 Suppressions used: 00:16:25.734 count bytes template 00:16:25.734 1 11 /usr/src/fio/parse.c 00:16:25.734 1 8 libtcmalloc_minimal.so 00:16:25.734 1 904 libcrypto.so 00:16:25.734 ----------------------------------------------------- 00:16:25.734 00:16:25.993 00:16:25.993 real 0m14.598s 00:16:25.993 user 0m6.833s 00:16:25.993 sys 0m7.375s 00:16:25.993 08:37:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:16:25.993 08:37:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:25.993 ************************************ 00:16:25.993 END TEST xnvme_fio_plugin 00:16:25.993 ************************************ 00:16:25.993 08:37:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:25.993 08:37:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:25.993 08:37:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:25.993 08:37:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:25.993 08:37:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:25.993 08:37:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.993 08:37:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:25.993 ************************************ 00:16:25.993 START TEST xnvme_rpc 00:16:25.993 ************************************ 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71759 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71759 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71759 ']' 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.993 08:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.993 [2024-11-22 08:37:01.018335] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
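The xnvme_rpc test launching here drives a create/inspect/delete cycle against the freshly started spdk_tgt. A rough by-hand equivalent, assuming the target is listening on the default /var/tmp/spdk.sock and using scripts/rpc.py from the SPDK tree (the commands and the jq filter are the ones the test itself runs via rpc_cmd):

    # Create an xnvme bdev over io_uring; -c turns conserve_cpu on,
    # matching this pass of the conserve_cpu loop.
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    # Read the bdev subsystem config back and pick out one parameter;
    # the test checks name, filename, io_mechanism and conserve_cpu this way.
    ./scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # Expected output: true. Finally tear the bdev down again.
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev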
00:16:25.993 [2024-11-22 08:37:01.018470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71759 ] 00:16:26.253 [2024-11-22 08:37:01.199973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.253 [2024-11-22 08:37:01.306999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.191 xnvme_bdev 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.191 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:27.192 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:27.192 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:27.192 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:27.192 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.192 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71759 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71759 ']' 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71759 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71759 00:16:27.451 killing process with pid 71759 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71759' 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71759 00:16:27.451 08:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71759 00:16:29.990 ************************************ 00:16:29.990 END TEST xnvme_rpc 00:16:29.990 ************************************ 00:16:29.990 00:16:29.990 real 0m3.742s 00:16:29.990 user 0m3.829s 00:16:29.990 sys 0m0.531s 00:16:29.990 08:37:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.990 08:37:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.990 08:37:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:29.990 08:37:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:29.990 08:37:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.990 08:37:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.990 ************************************ 00:16:29.990 START TEST xnvme_bdevperf 00:16:29.990 ************************************ 00:16:29.991 08:37:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:29.991 08:37:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:29.991 08:37:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:29.991 08:37:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:29.991 08:37:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:29.991 08:37:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:16:29.991 08:37:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:29.991 08:37:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:29.991 { 00:16:29.991 "subsystems": [ 00:16:29.991 { 00:16:29.991 "subsystem": "bdev", 00:16:29.991 "config": [ 00:16:29.991 { 00:16:29.991 "params": { 00:16:29.991 "io_mechanism": "io_uring", 00:16:29.991 "conserve_cpu": true, 00:16:29.991 "filename": "/dev/nvme0n1", 00:16:29.991 "name": "xnvme_bdev" 00:16:29.991 }, 00:16:29.991 "method": "bdev_xnvme_create" 00:16:29.991 }, 00:16:29.991 { 00:16:29.991 "method": "bdev_wait_for_examine" 00:16:29.991 } 00:16:29.991 ] 00:16:29.991 } 00:16:29.991 ] 00:16:29.991 } 00:16:29.991 [2024-11-22 08:37:04.810365] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:16:29.991 [2024-11-22 08:37:04.810648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71839 ] 00:16:29.991 [2024-11-22 08:37:04.993175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.251 [2024-11-22 08:37:05.105750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.511 Running I/O for 5 seconds... 00:16:32.414 41344.00 IOPS, 161.50 MiB/s [2024-11-22T08:37:08.880Z] 51712.00 IOPS, 202.00 MiB/s [2024-11-22T08:37:09.448Z] 47722.67 IOPS, 186.42 MiB/s [2024-11-22T08:37:10.827Z] 46048.00 IOPS, 179.88 MiB/s [2024-11-22T08:37:10.827Z] 44723.20 IOPS, 174.70 MiB/s 00:16:35.740 Latency(us) 00:16:35.740 [2024-11-22T08:37:10.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.740 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:35.740 xnvme_bdev : 5.00 44700.67 174.61 0.00 0.00 1427.96 750.11 4053.23 00:16:35.740 [2024-11-22T08:37:10.827Z] =================================================================================================================== 00:16:35.740 [2024-11-22T08:37:10.827Z] Total : 44700.67 174.61 0.00 0.00 1427.96 750.11 4053.23 00:16:36.678 08:37:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:36.678 08:37:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:36.678 08:37:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:36.678 08:37:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:36.678 08:37:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:36.678 { 00:16:36.678 "subsystems": [ 00:16:36.678 { 00:16:36.678 "subsystem": "bdev", 00:16:36.678 "config": [ 00:16:36.678 { 00:16:36.678 "params": { 00:16:36.678 "io_mechanism": "io_uring", 00:16:36.678 "conserve_cpu": true, 00:16:36.678 "filename": "/dev/nvme0n1", 00:16:36.678 "name": "xnvme_bdev" 00:16:36.678 }, 00:16:36.678 "method": "bdev_xnvme_create" 00:16:36.678 }, 00:16:36.678 { 00:16:36.678 "method": "bdev_wait_for_examine" 00:16:36.678 } 00:16:36.678 ] 00:16:36.678 } 00:16:36.678 ] 00:16:36.678 } 00:16:36.678 [2024-11-22 08:37:11.595513] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:16:36.678 [2024-11-22 08:37:11.595637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71917 ] 00:16:36.937 [2024-11-22 08:37:11.775121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.937 [2024-11-22 08:37:11.890945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.196 Running I/O for 5 seconds... 00:16:39.145 29632.00 IOPS, 115.75 MiB/s [2024-11-22T08:37:15.611Z] 26111.50 IOPS, 102.00 MiB/s [2024-11-22T08:37:16.548Z] 24981.00 IOPS, 97.58 MiB/s [2024-11-22T08:37:17.485Z] 24639.75 IOPS, 96.25 MiB/s 00:16:42.398 Latency(us) 00:16:42.398 [2024-11-22T08:37:17.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.398 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:42.398 xnvme_bdev : 5.00 24448.97 95.50 0.00 0.00 2609.56 1269.92 8369.66 00:16:42.399 [2024-11-22T08:37:17.486Z] =================================================================================================================== 00:16:42.399 [2024-11-22T08:37:17.486Z] Total : 24448.97 95.50 0.00 0.00 2609.56 1269.92 8369.66 00:16:43.336 00:16:43.336 real 0m13.569s 00:16:43.336 user 0m7.127s 00:16:43.336 sys 0m5.915s 00:16:43.336 08:37:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.336 08:37:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:43.336 ************************************ 00:16:43.336 END TEST xnvme_bdevperf 00:16:43.336 ************************************ 00:16:43.336 08:37:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:43.336 08:37:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:43.336 08:37:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.336 08:37:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.336 ************************************ 00:16:43.336 START TEST xnvme_fio_plugin 00:16:43.336 ************************************ 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:43.336 08:37:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.595 { 00:16:43.595 "subsystems": [ 00:16:43.595 { 00:16:43.595 "subsystem": "bdev", 00:16:43.595 "config": [ 00:16:43.595 { 00:16:43.595 "params": { 00:16:43.595 "io_mechanism": "io_uring", 00:16:43.595 "conserve_cpu": true, 00:16:43.595 "filename": "/dev/nvme0n1", 00:16:43.595 "name": "xnvme_bdev" 00:16:43.595 }, 00:16:43.595 "method": "bdev_xnvme_create" 00:16:43.595 }, 00:16:43.595 { 00:16:43.595 "method": "bdev_wait_for_examine" 00:16:43.595 } 00:16:43.595 ] 00:16:43.595 } 00:16:43.595 ] 00:16:43.595 } 00:16:43.595 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:43.595 fio-3.35 00:16:43.595 Starting 1 thread 00:16:50.236 00:16:50.236 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72039: Fri Nov 22 08:37:24 2024 00:16:50.236 read: IOPS=34.7k, BW=136MiB/s (142MB/s)(678MiB/5001msec) 00:16:50.236 slat (usec): min=3, max=105, avg= 4.62, stdev= 1.37 00:16:50.236 clat (usec): min=1248, max=3459, avg=1660.06, stdev=193.84 00:16:50.236 lat (usec): min=1252, max=3466, avg=1664.68, stdev=194.37 00:16:50.236 clat percentiles (usec): 00:16:50.236 | 1.00th=[ 1369], 5.00th=[ 1418], 10.00th=[ 1467], 20.00th=[ 1516], 00:16:50.236 | 30.00th=[ 1549], 40.00th=[ 1598], 50.00th=[ 1631], 60.00th=[ 1663], 00:16:50.236 | 70.00th=[ 1713], 80.00th=[ 1778], 90.00th=[ 1893], 95.00th=[ 2024], 00:16:50.236 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2737], 99.95th=[ 2900], 00:16:50.236 | 99.99th=[ 3359] 00:16:50.236 bw ( KiB/s): min=124928, max=147456, per=99.54%, avg=138183.11, 
stdev=8320.77, samples=9 00:16:50.236 iops : min=31232, max=36864, avg=34545.78, stdev=2080.19, samples=9 00:16:50.236 lat (msec) : 2=94.43%, 4=5.57% 00:16:50.236 cpu : usr=48.22%, sys=48.36%, ctx=13, majf=0, minf=762 00:16:50.236 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:50.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.236 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:50.236 issued rwts: total=173568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.236 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.236 00:16:50.236 Run status group 0 (all jobs): 00:16:50.236 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=678MiB (711MB), run=5001-5001msec 00:16:50.805 ----------------------------------------------------- 00:16:50.806 Suppressions used: 00:16:50.806 count bytes template 00:16:50.806 1 11 /usr/src/fio/parse.c 00:16:50.806 1 8 libtcmalloc_minimal.so 00:16:50.806 1 904 libcrypto.so 00:16:50.806 ----------------------------------------------------- 00:16:50.806 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:50.806 08:37:25 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:50.806 08:37:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.806 { 00:16:50.806 "subsystems": [ 00:16:50.806 { 00:16:50.806 "subsystem": "bdev", 00:16:50.806 "config": [ 00:16:50.806 { 00:16:50.806 "params": { 00:16:50.806 "io_mechanism": "io_uring", 00:16:50.806 "conserve_cpu": true, 00:16:50.806 "filename": "/dev/nvme0n1", 00:16:50.806 "name": "xnvme_bdev" 00:16:50.806 }, 00:16:50.806 "method": "bdev_xnvme_create" 00:16:50.806 }, 00:16:50.806 { 00:16:50.806 "method": "bdev_wait_for_examine" 00:16:50.806 } 00:16:50.806 ] 00:16:50.806 } 00:16:50.806 ] 00:16:50.806 } 00:16:51.064 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:51.064 fio-3.35 00:16:51.064 Starting 1 thread 00:16:57.681 00:16:57.681 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72135: Fri Nov 22 08:37:31 2024 00:16:57.681 write: IOPS=35.0k, BW=137MiB/s (143MB/s)(684MiB/5001msec); 0 zone resets 00:16:57.681 slat (usec): min=3, max=122, avg= 4.72, stdev= 1.69 00:16:57.681 clat (usec): min=1214, max=3306, avg=1640.80, stdev=232.33 00:16:57.681 lat (usec): min=1218, max=3341, avg=1645.52, stdev=233.05 00:16:57.681 clat percentiles (usec): 00:16:57.681 | 1.00th=[ 1319], 5.00th=[ 1385], 10.00th=[ 1418], 20.00th=[ 1467], 00:16:57.681 | 30.00th=[ 1516], 40.00th=[ 1549], 50.00th=[ 1598], 60.00th=[ 1631], 00:16:57.681 | 70.00th=[ 1680], 80.00th=[ 1762], 90.00th=[ 1942], 95.00th=[ 2114], 00:16:57.681 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 2802], 99.95th=[ 2868], 00:16:57.681 | 99.99th=[ 3163] 00:16:57.681 bw ( KiB/s): min=113152, max=155136, per=99.88%, avg=139832.89, stdev=13142.16, samples=9 00:16:57.681 iops : min=28288, max=38784, avg=34958.22, stdev=3285.54, samples=9 00:16:57.681 lat (msec) : 2=92.32%, 4=7.68% 00:16:57.681 cpu : usr=49.08%, sys=47.70%, ctx=9, majf=0, minf=762 00:16:57.681 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:57.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.681 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:57.681 issued rwts: total=0,175040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.681 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:57.681 00:16:57.681 Run status group 0 (all jobs): 00:16:57.681 WRITE: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=684MiB (717MB), run=5001-5001msec 00:16:57.941 ----------------------------------------------------- 00:16:57.941 Suppressions used: 00:16:57.941 count bytes template 00:16:57.941 1 11 /usr/src/fio/parse.c 00:16:57.941 1 8 libtcmalloc_minimal.so 00:16:57.941 1 904 libcrypto.so 00:16:57.941 ----------------------------------------------------- 00:16:57.941 00:16:57.941 ************************************ 00:16:57.941 END TEST xnvme_fio_plugin 00:16:57.941 ************************************ 00:16:57.941 00:16:57.941 real 0m14.539s 00:16:57.941 user 0m8.448s 00:16:57.941 sys 0m5.514s 00:16:57.941 08:37:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:16:57.941 08:37:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:57.941 08:37:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:57.941 08:37:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.941 08:37:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.941 08:37:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.941 ************************************ 00:16:57.941 START TEST xnvme_rpc 00:16:57.941 ************************************ 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:57.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72217 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72217 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72217 ']' 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.941 08:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.201 [2024-11-22 08:37:33.084290] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
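From here the suite switches io mechanisms: the xtrace above shows xnvme/xnvme.sh moving on to io_uring_cmd, which talks to the NVMe character device /dev/ng0n1 instead of the block device /dev/nvme0n1, and starting the same three tests over again. The driving loop, paraphrased from the visible xtrace (variable names are the script's own; bodies trimmed to the steps the log shows):

    for io in "${xnvme_io[@]}"; do                        # e.g. io_uring, io_uring_cmd
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      method_bdev_xnvme_create_0["filename"]=$filename    # /dev/nvme0n1 or /dev/ng0n1
      for cc in "${xnvme_conserve_cpu[@]}"; do            # false, then true
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
      done
    done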
00:16:58.201 [2024-11-22 08:37:33.084429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72217 ] 00:16:58.201 [2024-11-22 08:37:33.261856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.460 [2024-11-22 08:37:33.371060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.399 xnvme_bdev 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72217 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72217 ']' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72217 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72217 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.399 killing process with pid 72217 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72217' 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72217 00:16:59.399 08:37:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72217 00:17:01.977 00:17:01.977 real 0m3.712s 00:17:01.977 user 0m3.805s 00:17:01.977 sys 0m0.505s 00:17:01.977 08:37:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.977 ************************************ 00:17:01.977 END TEST xnvme_rpc 00:17:01.977 ************************************ 00:17:01.977 08:37:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.977 08:37:36 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:01.977 08:37:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:01.977 08:37:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.977 08:37:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:01.977 ************************************ 00:17:01.977 START TEST xnvme_bdevperf 00:17:01.977 ************************************ 00:17:01.977 08:37:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:01.977 08:37:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:01.977 08:37:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:01.977 08:37:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:01.977 08:37:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:01.977 08:37:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:01.977 08:37:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:01.977 08:37:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:01.977 { 00:17:01.977 "subsystems": [ 00:17:01.977 { 00:17:01.977 "subsystem": "bdev", 00:17:01.977 "config": [ 00:17:01.977 { 00:17:01.977 "params": { 00:17:01.977 "io_mechanism": "io_uring_cmd", 00:17:01.977 "conserve_cpu": false, 00:17:01.977 "filename": "/dev/ng0n1", 00:17:01.977 "name": "xnvme_bdev" 00:17:01.977 }, 00:17:01.977 "method": "bdev_xnvme_create" 00:17:01.977 }, 00:17:01.977 { 00:17:01.977 "method": "bdev_wait_for_examine" 00:17:01.977 } 00:17:01.977 ] 00:17:01.977 } 00:17:01.977 ] 00:17:01.977 } 00:17:01.977 [2024-11-22 08:37:36.852997] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:17:01.977 [2024-11-22 08:37:36.853246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72302 ] 00:17:01.977 [2024-11-22 08:37:37.033632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.236 [2024-11-22 08:37:37.137208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.497 Running I/O for 5 seconds... 00:17:04.370 28352.00 IOPS, 110.75 MiB/s [2024-11-22T08:37:40.836Z] 26400.00 IOPS, 103.12 MiB/s [2024-11-22T08:37:41.774Z] 26282.67 IOPS, 102.67 MiB/s [2024-11-22T08:37:42.712Z] 25552.00 IOPS, 99.81 MiB/s [2024-11-22T08:37:42.712Z] 25305.60 IOPS, 98.85 MiB/s 00:17:07.625 Latency(us) 00:17:07.625 [2024-11-22T08:37:42.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.625 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:07.625 xnvme_bdev : 5.01 25254.91 98.65 0.00 0.00 2526.31 1151.49 8264.38 00:17:07.625 [2024-11-22T08:37:42.712Z] =================================================================================================================== 00:17:07.625 [2024-11-22T08:37:42.712Z] Total : 25254.91 98.65 0.00 0.00 2526.31 1151.49 8264.38 00:17:08.562 08:37:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:08.562 08:37:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:08.562 08:37:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:08.562 08:37:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:08.562 08:37:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:08.562 { 00:17:08.562 "subsystems": [ 00:17:08.562 { 00:17:08.562 "subsystem": "bdev", 00:17:08.562 "config": [ 00:17:08.562 { 00:17:08.562 "params": { 00:17:08.562 "io_mechanism": "io_uring_cmd", 00:17:08.562 "conserve_cpu": false, 00:17:08.562 "filename": "/dev/ng0n1", 00:17:08.562 "name": "xnvme_bdev" 00:17:08.562 }, 00:17:08.562 "method": "bdev_xnvme_create" 00:17:08.562 }, 00:17:08.562 { 00:17:08.562 "method": "bdev_wait_for_examine" 00:17:08.562 } 00:17:08.562 ] 00:17:08.562 } 00:17:08.562 ] 00:17:08.562 } 00:17:08.562 [2024-11-22 08:37:43.589821] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
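For io_uring_cmd the bdevperf pattern list grows beyond randread/randwrite: the runs that follow also cover unmap and write_zeroes. The four invocations differ only in the -w argument; a sketch, where gen_conf stands in for the harness helper that emits the JSON shown above (substitute a config file when running outside the harness):

    for w in randread randwrite unmap write_zeroes; do
      ./build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done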
00:17:08.562 [2024-11-22 08:37:43.590092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72376 ] 00:17:08.822 [2024-11-22 08:37:43.770046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.822 [2024-11-22 08:37:43.871080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.391 Running I/O for 5 seconds... 00:17:11.267 24832.00 IOPS, 97.00 MiB/s [2024-11-22T08:37:47.292Z] 24320.00 IOPS, 95.00 MiB/s [2024-11-22T08:37:48.229Z] 23957.33 IOPS, 93.58 MiB/s [2024-11-22T08:37:49.213Z] 23568.00 IOPS, 92.06 MiB/s 00:17:14.126 Latency(us) 00:17:14.126 [2024-11-22T08:37:49.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.126 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:14.126 xnvme_bdev : 5.01 23267.44 90.89 0.00 0.00 2741.84 1072.53 7948.54 00:17:14.126 [2024-11-22T08:37:49.213Z] =================================================================================================================== 00:17:14.126 [2024-11-22T08:37:49.213Z] Total : 23267.44 90.89 0.00 0.00 2741.84 1072.53 7948.54 00:17:15.503 08:37:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:15.503 08:37:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:15.503 08:37:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:15.503 08:37:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:15.503 08:37:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:15.503 { 00:17:15.503 "subsystems": [ 00:17:15.503 { 00:17:15.503 "subsystem": "bdev", 00:17:15.503 "config": [ 00:17:15.503 { 00:17:15.503 "params": { 00:17:15.503 "io_mechanism": "io_uring_cmd", 00:17:15.503 "conserve_cpu": false, 00:17:15.503 "filename": "/dev/ng0n1", 00:17:15.503 "name": "xnvme_bdev" 00:17:15.503 }, 00:17:15.503 "method": "bdev_xnvme_create" 00:17:15.503 }, 00:17:15.503 { 00:17:15.503 "method": "bdev_wait_for_examine" 00:17:15.503 } 00:17:15.503 ] 00:17:15.503 } 00:17:15.503 ] 00:17:15.503 } 00:17:15.503 [2024-11-22 08:37:50.338292] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:17:15.503 [2024-11-22 08:37:50.338426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72455 ] 00:17:15.503 [2024-11-22 08:37:50.518860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.763 [2024-11-22 08:37:50.616373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.023 Running I/O for 5 seconds... 
00:17:17.901 73472.00 IOPS, 287.00 MiB/s [2024-11-22T08:37:54.363Z] 73536.00 IOPS, 287.25 MiB/s [2024-11-22T08:37:55.301Z] 73493.33 IOPS, 287.08 MiB/s [2024-11-22T08:37:56.237Z] 73456.00 IOPS, 286.94 MiB/s [2024-11-22T08:37:56.237Z] 73497.60 IOPS, 287.10 MiB/s 00:17:21.150 Latency(us) 00:17:21.150 [2024-11-22T08:37:56.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.150 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:21.150 xnvme_bdev : 5.00 73484.75 287.05 0.00 0.00 868.09 641.54 2224.01 00:17:21.150 [2024-11-22T08:37:56.237Z] =================================================================================================================== 00:17:21.150 [2024-11-22T08:37:56.237Z] Total : 73484.75 287.05 0.00 0.00 868.09 641.54 2224.01 00:17:22.087 08:37:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:22.087 08:37:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:22.087 08:37:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:22.087 08:37:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:22.087 08:37:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:22.087 { 00:17:22.087 "subsystems": [ 00:17:22.087 { 00:17:22.087 "subsystem": "bdev", 00:17:22.087 "config": [ 00:17:22.087 { 00:17:22.088 "params": { 00:17:22.088 "io_mechanism": "io_uring_cmd", 00:17:22.088 "conserve_cpu": false, 00:17:22.088 "filename": "/dev/ng0n1", 00:17:22.088 "name": "xnvme_bdev" 00:17:22.088 }, 00:17:22.088 "method": "bdev_xnvme_create" 00:17:22.088 }, 00:17:22.088 { 00:17:22.088 "method": "bdev_wait_for_examine" 00:17:22.088 } 00:17:22.088 ] 00:17:22.088 } 00:17:22.088 ] 00:17:22.088 } 00:17:22.088 [2024-11-22 08:37:57.068210] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:17:22.088 [2024-11-22 08:37:57.068335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72535 ] 00:17:22.347 [2024-11-22 08:37:57.246241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.347 [2024-11-22 08:37:57.356087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.916 Running I/O for 5 seconds... 
00:17:24.789 57017.00 IOPS, 222.72 MiB/s [2024-11-22T08:38:00.812Z] 49559.50 IOPS, 193.59 MiB/s [2024-11-22T08:38:01.748Z] 49549.67 IOPS, 193.55 MiB/s [2024-11-22T08:38:03.162Z] 48627.25 IOPS, 189.95 MiB/s [2024-11-22T08:38:03.162Z] 48622.20 IOPS, 189.93 MiB/s 00:17:28.075 Latency(us) 00:17:28.075 [2024-11-22T08:38:03.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.075 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:28.075 xnvme_bdev : 5.01 48558.91 189.68 0.00 0.00 1314.03 195.75 10317.31 00:17:28.075 [2024-11-22T08:38:03.162Z] =================================================================================================================== 00:17:28.075 [2024-11-22T08:38:03.162Z] Total : 48558.91 189.68 0.00 0.00 1314.03 195.75 10317.31 00:17:29.011 00:17:29.011 real 0m27.167s 00:17:29.011 user 0m14.061s 00:17:29.011 sys 0m12.650s 00:17:29.011 ************************************ 00:17:29.011 END TEST xnvme_bdevperf 00:17:29.011 ************************************ 00:17:29.011 08:38:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.011 08:38:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:29.011 08:38:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:29.011 08:38:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:29.011 08:38:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.011 08:38:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:29.011 ************************************ 00:17:29.011 START TEST xnvme_fio_plugin 00:17:29.011 ************************************ 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:29.011 08:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:29.011 { 00:17:29.011 "subsystems": [ 00:17:29.011 { 00:17:29.011 "subsystem": "bdev", 00:17:29.011 "config": [ 00:17:29.011 { 00:17:29.011 "params": { 00:17:29.011 "io_mechanism": "io_uring_cmd", 00:17:29.011 "conserve_cpu": false, 00:17:29.011 "filename": "/dev/ng0n1", 00:17:29.011 "name": "xnvme_bdev" 00:17:29.011 }, 00:17:29.011 "method": "bdev_xnvme_create" 00:17:29.011 }, 00:17:29.011 { 00:17:29.011 "method": "bdev_wait_for_examine" 00:17:29.011 } 00:17:29.011 ] 00:17:29.011 } 00:17:29.011 ] 00:17:29.011 } 00:17:29.270 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:29.270 fio-3.35 00:17:29.270 Starting 1 thread 00:17:35.843 00:17:35.843 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72656: Fri Nov 22 08:38:10 2024 00:17:35.843 read: IOPS=22.5k, BW=88.1MiB/s (92.3MB/s)(441MiB/5002msec) 00:17:35.843 slat (usec): min=3, max=162, avg= 8.14, stdev= 3.71 00:17:35.843 clat (usec): min=1656, max=4636, avg=2508.97, stdev=230.91 00:17:35.843 lat (usec): min=1660, max=4662, avg=2517.11, stdev=231.69 00:17:35.843 clat percentiles (usec): 00:17:35.843 | 1.00th=[ 1909], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2311], 00:17:35.843 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:17:35.843 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2868], 00:17:35.843 | 99.00th=[ 2966], 99.50th=[ 2999], 99.90th=[ 3097], 99.95th=[ 4113], 00:17:35.843 | 99.99th=[ 4555] 00:17:35.843 bw ( KiB/s): min=84480, max=95552, per=100.00%, avg=90375.11, stdev=3340.35, samples=9 00:17:35.843 iops : min=21120, max=23888, avg=22593.78, stdev=835.09, samples=9 00:17:35.843 lat (msec) : 2=2.07%, 4=97.87%, 10=0.06% 00:17:35.843 cpu : usr=39.89%, sys=58.27%, ctx=14, majf=0, minf=762 00:17:35.843 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:35.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.843 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:35.843 issued rwts: 
total=112768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:35.843 00:17:35.843 Run status group 0 (all jobs): 00:17:35.843 READ: bw=88.1MiB/s (92.3MB/s), 88.1MiB/s-88.1MiB/s (92.3MB/s-92.3MB/s), io=441MiB (462MB), run=5002-5002msec 00:17:36.411 ----------------------------------------------------- 00:17:36.411 Suppressions used: 00:17:36.411 count bytes template 00:17:36.411 1 11 /usr/src/fio/parse.c 00:17:36.411 1 8 libtcmalloc_minimal.so 00:17:36.411 1 904 libcrypto.so 00:17:36.411 ----------------------------------------------------- 00:17:36.411 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:36.671 { 00:17:36.671 "subsystems": [ 00:17:36.671 { 00:17:36.671 "subsystem": "bdev", 00:17:36.671 "config": [ 00:17:36.671 { 00:17:36.671 "params": { 00:17:36.671 "io_mechanism": "io_uring_cmd", 00:17:36.671 "conserve_cpu": false, 00:17:36.671 "filename": "/dev/ng0n1", 00:17:36.671 "name": "xnvme_bdev" 00:17:36.671 }, 00:17:36.671 "method": "bdev_xnvme_create" 00:17:36.671 }, 00:17:36.671 { 00:17:36.671 "method": "bdev_wait_for_examine" 00:17:36.671 } 00:17:36.671 ] 00:17:36.671 } 00:17:36.671 ] 00:17:36.671 } 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:36.671 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:36.672 08:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:36.931 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:36.931 fio-3.35 00:17:36.931 Starting 1 thread 00:17:43.508 00:17:43.508 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72760: Fri Nov 22 08:38:17 2024 00:17:43.508 write: IOPS=22.7k, BW=88.6MiB/s (92.9MB/s)(443MiB/5002msec); 0 zone resets 00:17:43.508 slat (nsec): min=2102, max=92613, avg=8322.99, stdev=3896.06 00:17:43.508 clat (usec): min=932, max=3346, avg=2485.91, stdev=321.93 00:17:43.509 lat (usec): min=935, max=3374, avg=2494.23, stdev=323.09 00:17:43.509 clat percentiles (usec): 00:17:43.509 | 1.00th=[ 1303], 5.00th=[ 1762], 10.00th=[ 2180], 20.00th=[ 2311], 00:17:43.509 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2606], 00:17:43.509 | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2868], 00:17:43.509 | 99.00th=[ 2966], 99.50th=[ 2999], 99.90th=[ 3097], 99.95th=[ 3163], 00:17:43.509 | 99.99th=[ 3261] 00:17:43.509 bw ( KiB/s): min=86528, max=113152, per=100.00%, avg=91002.44, stdev=8433.03, samples=9 00:17:43.509 iops : min=21632, max=28288, avg=22750.56, stdev=2108.27, samples=9 00:17:43.509 lat (usec) : 1000=0.02% 00:17:43.509 lat (msec) : 2=6.32%, 4=93.67% 00:17:43.509 cpu : usr=40.06%, sys=58.28%, ctx=10, majf=0, minf=762 00:17:43.509 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:43.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.509 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:43.509 issued rwts: total=0,113408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.509 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:43.509 00:17:43.509 Run status group 0 (all jobs): 00:17:43.509 WRITE: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=443MiB (465MB), run=5002-5002msec 00:17:44.078 ----------------------------------------------------- 00:17:44.078 Suppressions used: 00:17:44.078 count bytes template 00:17:44.078 1 11 /usr/src/fio/parse.c 00:17:44.078 1 8 libtcmalloc_minimal.so 00:17:44.078 1 904 libcrypto.so 00:17:44.078 ----------------------------------------------------- 00:17:44.078 00:17:44.078 00:17:44.078 real 0m15.138s 00:17:44.078 user 0m8.183s 00:17:44.078 sys 0m6.536s 00:17:44.078 ************************************ 00:17:44.078 END TEST xnvme_fio_plugin 00:17:44.078 ************************************ 00:17:44.078 08:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.078 08:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:44.337 08:38:19 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:44.337 08:38:19 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:44.337 08:38:19 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:44.337 08:38:19 nvme_xnvme -- 
xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:44.337 08:38:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:44.337 08:38:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.337 08:38:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:44.337 ************************************ 00:17:44.337 START TEST xnvme_rpc 00:17:44.337 ************************************ 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72845 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72845 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72845 ']' 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.337 08:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.338 08:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.338 [2024-11-22 08:38:19.340288] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:17:44.338 [2024-11-22 08:38:19.340656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72845 ] 00:17:44.597 [2024-11-22 08:38:19.520736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.597 [2024-11-22 08:38:19.668632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.977 xnvme_bdev 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72845 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72845 ']' 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72845 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72845 00:17:45.977 killing process with pid 72845 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72845' 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72845 00:17:45.977 08:38:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72845 00:17:48.515 ************************************ 00:17:48.515 END TEST xnvme_rpc 00:17:48.515 ************************************ 00:17:48.515 00:17:48.515 real 0m4.365s 00:17:48.515 user 0m4.217s 00:17:48.515 sys 0m0.728s 00:17:48.515 08:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.515 08:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.775 08:38:23 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:48.775 08:38:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:48.775 08:38:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.775 08:38:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:48.775 ************************************ 00:17:48.775 START TEST xnvme_bdevperf 00:17:48.775 ************************************ 00:17:48.775 08:38:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:48.775 08:38:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:48.775 08:38:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:48.775 08:38:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:48.775 08:38:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:48.775 08:38:23 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:48.775 08:38:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:48.775 08:38:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:48.775 { 00:17:48.775 "subsystems": [ 00:17:48.775 { 00:17:48.775 "subsystem": "bdev", 00:17:48.775 "config": [ 00:17:48.775 { 00:17:48.775 "params": { 00:17:48.775 "io_mechanism": "io_uring_cmd", 00:17:48.775 "conserve_cpu": true, 00:17:48.775 "filename": "/dev/ng0n1", 00:17:48.775 "name": "xnvme_bdev" 00:17:48.775 }, 00:17:48.775 "method": "bdev_xnvme_create" 00:17:48.775 }, 00:17:48.775 { 00:17:48.775 "method": "bdev_wait_for_examine" 00:17:48.775 } 00:17:48.775 ] 00:17:48.775 } 00:17:48.775 ] 00:17:48.775 } 00:17:48.775 [2024-11-22 08:38:23.768346] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:17:48.775 [2024-11-22 08:38:23.768466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72932 ] 00:17:49.034 [2024-11-22 08:38:23.953069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.034 [2024-11-22 08:38:24.107236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.602 Running I/O for 5 seconds... 00:17:51.478 28352.00 IOPS, 110.75 MiB/s [2024-11-22T08:38:27.945Z] 26688.00 IOPS, 104.25 MiB/s [2024-11-22T08:38:28.885Z] 27157.33 IOPS, 106.08 MiB/s [2024-11-22T08:38:29.823Z] 25840.00 IOPS, 100.94 MiB/s 00:17:54.736 Latency(us) 00:17:54.736 [2024-11-22T08:38:29.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.736 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:54.736 xnvme_bdev : 5.01 25061.22 97.90 0.00 0.00 2546.18 894.87 8422.30 00:17:54.736 [2024-11-22T08:38:29.823Z] =================================================================================================================== 00:17:54.736 [2024-11-22T08:38:29.823Z] Total : 25061.22 97.90 0.00 0.00 2546.18 894.87 8422.30 00:17:56.141 08:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:56.142 08:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:56.142 08:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:56.142 08:38:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:56.142 08:38:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:56.142 { 00:17:56.142 "subsystems": [ 00:17:56.142 { 00:17:56.142 "subsystem": "bdev", 00:17:56.142 "config": [ 00:17:56.142 { 00:17:56.142 "params": { 00:17:56.142 "io_mechanism": "io_uring_cmd", 00:17:56.142 "conserve_cpu": true, 00:17:56.142 "filename": "/dev/ng0n1", 00:17:56.142 "name": "xnvme_bdev" 00:17:56.142 }, 00:17:56.142 "method": "bdev_xnvme_create" 00:17:56.142 }, 00:17:56.142 { 00:17:56.142 "method": "bdev_wait_for_examine" 00:17:56.142 } 00:17:56.142 ] 00:17:56.142 } 00:17:56.142 ] 00:17:56.142 } 00:17:56.142 [2024-11-22 08:38:30.874479] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:17:56.142 [2024-11-22 08:38:30.874604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73012 ] 00:17:56.142 [2024-11-22 08:38:31.058777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.142 [2024-11-22 08:38:31.208264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.710 Running I/O for 5 seconds... 00:17:58.588 28797.00 IOPS, 112.49 MiB/s [2024-11-22T08:38:34.615Z] 25246.50 IOPS, 98.62 MiB/s [2024-11-22T08:38:35.996Z] 25023.00 IOPS, 97.75 MiB/s [2024-11-22T08:38:36.935Z] 24735.25 IOPS, 96.62 MiB/s 00:18:01.848 Latency(us) 00:18:01.848 [2024-11-22T08:38:36.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.848 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:01.848 xnvme_bdev : 5.00 25257.92 98.66 0.00 0.00 2525.86 717.21 8580.22 00:18:01.848 [2024-11-22T08:38:36.935Z] =================================================================================================================== 00:18:01.848 [2024-11-22T08:38:36.935Z] Total : 25257.92 98.66 0.00 0.00 2525.86 717.21 8580.22 00:18:02.786 08:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:02.786 08:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:02.786 08:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:02.786 08:38:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:02.786 08:38:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:03.045 { 00:18:03.045 "subsystems": [ 00:18:03.045 { 00:18:03.045 "subsystem": "bdev", 00:18:03.045 "config": [ 00:18:03.045 { 00:18:03.045 "params": { 00:18:03.045 "io_mechanism": "io_uring_cmd", 00:18:03.045 "conserve_cpu": true, 00:18:03.045 "filename": "/dev/ng0n1", 00:18:03.045 "name": "xnvme_bdev" 00:18:03.045 }, 00:18:03.045 "method": "bdev_xnvme_create" 00:18:03.045 }, 00:18:03.045 { 00:18:03.045 "method": "bdev_wait_for_examine" 00:18:03.045 } 00:18:03.045 ] 00:18:03.045 } 00:18:03.045 ] 00:18:03.045 } 00:18:03.045 [2024-11-22 08:38:37.935201] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:18:03.045 [2024-11-22 08:38:37.935319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73092 ] 00:18:03.045 [2024-11-22 08:38:38.117539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.304 [2024-11-22 08:38:38.261987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.871 Running I/O for 5 seconds... 
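Recapping the xnvme_rpc test that completed above (spdk_tgt pid 72845): rpc_cmd there is the autotest wrapper around the repo's scripts/rpc.py, so the whole flow reduces to roughly:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" &                                  # waitforlisten blocks on /var/tmp/spdk.sock
"$rpc" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
"$rpc" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
"$rpc" bdev_xnvme_delete xnvme_bdev
kill "$!"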
00:18:05.745 69504.00 IOPS, 271.50 MiB/s [2024-11-22T08:38:41.769Z] 69216.00 IOPS, 270.38 MiB/s [2024-11-22T08:38:42.706Z] 69184.00 IOPS, 270.25 MiB/s [2024-11-22T08:38:44.084Z] 69216.00 IOPS, 270.38 MiB/s [2024-11-22T08:38:44.084Z] 69286.40 IOPS, 270.65 MiB/s 00:18:08.997 Latency(us) 00:18:08.997 [2024-11-22T08:38:44.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.997 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:08.997 xnvme_bdev : 5.00 69276.06 270.61 0.00 0.00 920.98 549.42 2447.73 00:18:08.997 [2024-11-22T08:38:44.084Z] =================================================================================================================== 00:18:08.997 [2024-11-22T08:38:44.084Z] Total : 69276.06 270.61 0.00 0.00 920.98 549.42 2447.73 00:18:09.934 08:38:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:09.934 08:38:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:09.934 08:38:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:09.934 08:38:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:09.934 08:38:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:09.934 { 00:18:09.934 "subsystems": [ 00:18:09.934 { 00:18:09.934 "subsystem": "bdev", 00:18:09.934 "config": [ 00:18:09.934 { 00:18:09.934 "params": { 00:18:09.934 "io_mechanism": "io_uring_cmd", 00:18:09.934 "conserve_cpu": true, 00:18:09.934 "filename": "/dev/ng0n1", 00:18:09.934 "name": "xnvme_bdev" 00:18:09.934 }, 00:18:09.934 "method": "bdev_xnvme_create" 00:18:09.934 }, 00:18:09.934 { 00:18:09.934 "method": "bdev_wait_for_examine" 00:18:09.934 } 00:18:09.934 ] 00:18:09.934 } 00:18:09.934 ] 00:18:09.934 } 00:18:09.934 [2024-11-22 08:38:44.991877] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:18:09.934 [2024-11-22 08:38:44.992020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73177 ] 00:18:10.193 [2024-11-22 08:38:45.170644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.452 [2024-11-22 08:38:45.323505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.710 Running I/O for 5 seconds... 
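As a sanity check, the MiB/s column is just IOPS scaled by the 4096-byte IO size, e.g. for the unmap total above:

awk 'BEGIN { printf "%.2f MiB/s\n", 69276.06 * 4096 / 1048576 }'   # prints 270.61 MiB/s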
00:18:12.653 65088.00 IOPS, 254.25 MiB/s [2024-11-22T08:38:49.112Z] 63461.50 IOPS, 247.90 MiB/s [2024-11-22T08:38:50.044Z] 61988.33 IOPS, 242.14 MiB/s [2024-11-22T08:38:50.978Z] 61606.50 IOPS, 240.65 MiB/s [2024-11-22T08:38:50.978Z] 58968.40 IOPS, 230.35 MiB/s 00:18:15.891 Latency(us) 00:18:15.891 [2024-11-22T08:38:50.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.891 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:15.891 xnvme_bdev : 5.00 58939.20 230.23 0.00 0.00 1081.32 51.61 17897.38 00:18:15.891 [2024-11-22T08:38:50.978Z] =================================================================================================================== 00:18:15.891 [2024-11-22T08:38:50.978Z] Total : 58939.20 230.23 0.00 0.00 1081.32 51.61 17897.38 00:18:17.269 00:18:17.269 real 0m28.274s 00:18:17.269 user 0m17.821s 00:18:17.269 sys 0m8.163s 00:18:17.269 08:38:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.269 08:38:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 ************************************ 00:18:17.269 END TEST xnvme_bdevperf 00:18:17.269 ************************************ 00:18:17.269 08:38:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:17.269 08:38:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:17.269 08:38:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.269 08:38:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 ************************************ 00:18:17.269 START TEST xnvme_fio_plugin 00:18:17.269 ************************************ 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
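This prologue repeats the sanitizer shim from the earlier xnvme_fio_plugin run; condensed, with the paths exactly as they appear in this log, it amounts to:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 on this host
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev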
00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:17.269 { 00:18:17.269 "subsystems": [ 00:18:17.269 { 00:18:17.269 "subsystem": "bdev", 00:18:17.269 "config": [ 00:18:17.269 { 00:18:17.269 "params": { 00:18:17.269 "io_mechanism": "io_uring_cmd", 00:18:17.269 "conserve_cpu": true, 00:18:17.269 "filename": "/dev/ng0n1", 00:18:17.269 "name": "xnvme_bdev" 00:18:17.269 }, 00:18:17.269 "method": "bdev_xnvme_create" 00:18:17.269 }, 00:18:17.269 { 00:18:17.269 "method": "bdev_wait_for_examine" 00:18:17.269 } 00:18:17.269 ] 00:18:17.269 } 00:18:17.269 ] 00:18:17.269 } 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:17.269 08:38:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:17.269 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:17.269 fio-3.35 00:18:17.269 Starting 1 thread 00:18:23.905 00:18:23.905 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73301: Fri Nov 22 08:38:58 2024 00:18:23.905 read: IOPS=22.8k, BW=89.2MiB/s (93.6MB/s)(447MiB/5003msec) 00:18:23.905 slat (usec): min=2, max=211, avg= 8.07, stdev= 4.05 00:18:23.905 clat (usec): min=916, max=3211, avg=2476.33, stdev=410.69 00:18:23.905 lat (usec): min=918, max=3238, avg=2484.40, stdev=412.31 00:18:23.905 clat percentiles (usec): 00:18:23.905 | 1.00th=[ 1106], 5.00th=[ 1319], 10.00th=[ 2114], 20.00th=[ 2343], 00:18:23.905 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:18:23.905 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2900], 00:18:23.905 | 99.00th=[ 2966], 99.50th=[ 2999], 99.90th=[ 3064], 99.95th=[ 3097], 00:18:23.905 | 99.99th=[ 3163] 00:18:23.905 bw ( KiB/s): min=86016, max=115712, per=100.00%, avg=92007.67, stdev=10529.07, samples=9 00:18:23.905 iops : min=21504, max=28928, avg=23001.89, stdev=2632.28, samples=9 00:18:23.905 lat (usec) : 1000=0.18% 00:18:23.905 lat (msec) : 2=9.38%, 4=90.44% 00:18:23.905 cpu : usr=41.94%, sys=54.14%, ctx=12, majf=0, minf=762 00:18:23.905 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:23.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.905 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:18:23.905 issued rwts: total=114304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:23.905 00:18:23.905 Run status group 0 (all jobs): 00:18:23.905 READ: bw=89.2MiB/s (93.6MB/s), 89.2MiB/s-89.2MiB/s (93.6MB/s-93.6MB/s), io=447MiB (468MB), run=5003-5003msec 00:18:24.845 ----------------------------------------------------- 00:18:24.845 Suppressions used: 00:18:24.845 count bytes template 00:18:24.845 1 11 /usr/src/fio/parse.c 00:18:24.845 1 8 libtcmalloc_minimal.so 00:18:24.845 1 904 libcrypto.so 00:18:24.845 ----------------------------------------------------- 00:18:24.845 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:24.845 { 00:18:24.845 "subsystems": [ 00:18:24.845 { 00:18:24.845 "subsystem": "bdev", 00:18:24.845 "config": [ 00:18:24.845 { 00:18:24.845 "params": { 00:18:24.845 "io_mechanism": "io_uring_cmd", 00:18:24.845 "conserve_cpu": true, 00:18:24.845 "filename": "/dev/ng0n1", 00:18:24.845 "name": "xnvme_bdev" 00:18:24.845 }, 00:18:24.845 "method": "bdev_xnvme_create" 00:18:24.845 }, 00:18:24.845 { 00:18:24.845 "method": "bdev_wait_for_examine" 00:18:24.845 } 00:18:24.845 ] 00:18:24.845 } 00:18:24.845 ] 00:18:24.845 } 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:24.845 08:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:24.845 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:24.845 fio-3.35 00:18:24.845 Starting 1 thread 00:18:31.420 00:18:31.420 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73392: Fri Nov 22 08:39:05 2024 00:18:31.420 write: IOPS=24.7k, BW=96.7MiB/s (101MB/s)(484MiB/5002msec); 0 zone resets 00:18:31.420 slat (usec): min=2, max=188, avg= 7.73, stdev= 4.17 00:18:31.420 clat (usec): min=801, max=4480, avg=2273.33, stdev=578.14 00:18:31.420 lat (usec): min=804, max=4509, avg=2281.06, stdev=580.51 00:18:31.420 clat percentiles (usec): 00:18:31.420 | 1.00th=[ 1012], 5.00th=[ 1139], 10.00th=[ 1237], 20.00th=[ 1598], 00:18:31.420 | 30.00th=[ 2245], 40.00th=[ 2376], 50.00th=[ 2474], 60.00th=[ 2540], 00:18:31.420 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2900], 00:18:31.420 | 99.00th=[ 3032], 99.50th=[ 3097], 99.90th=[ 3326], 99.95th=[ 3392], 00:18:31.420 | 99.99th=[ 3490] 00:18:31.420 bw ( KiB/s): min=84992, max=126464, per=99.78%, avg=98759.11, stdev=15856.39, samples=9 00:18:31.420 iops : min=21248, max=31616, avg=24689.78, stdev=3964.10, samples=9 00:18:31.420 lat (usec) : 1000=0.78% 00:18:31.420 lat (msec) : 2=24.32%, 4=74.89%, 10=0.01% 00:18:31.420 cpu : usr=48.95%, sys=47.51%, ctx=9, majf=0, minf=762 00:18:31.420 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:31.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.420 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:31.420 issued rwts: total=0,123776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.420 00:18:31.420 Run status group 0 (all jobs): 00:18:31.420 WRITE: bw=96.7MiB/s (101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=484MiB (507MB), run=5002-5002msec 00:18:31.988 ----------------------------------------------------- 00:18:31.988 Suppressions used: 00:18:31.988 count bytes template 00:18:31.988 1 11 /usr/src/fio/parse.c 00:18:31.988 1 8 libtcmalloc_minimal.so 00:18:31.988 1 904 libcrypto.so 00:18:31.988 ----------------------------------------------------- 00:18:31.988 00:18:31.988 00:18:31.988 real 0m15.019s 00:18:31.988 user 0m8.619s 00:18:31.988 sys 0m5.781s 00:18:31.989 ************************************ 00:18:31.989 END TEST xnvme_fio_plugin 00:18:31.989 ************************************ 00:18:31.989 08:39:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.989 08:39:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:32.249 08:39:07 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72845 00:18:32.249 08:39:07 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72845 ']' 00:18:32.249 08:39:07 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72845 00:18:32.249 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72845) - No such process 00:18:32.249 Process with pid 72845 is not found 00:18:32.249 08:39:07 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72845 is not found' 00:18:32.249 00:18:32.249 real 3m50.429s 00:18:32.249 user 2m4.810s 00:18:32.249 sys 1m28.496s 00:18:32.249 08:39:07 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.249 ************************************ 00:18:32.249 END TEST nvme_xnvme 00:18:32.249 ************************************ 00:18:32.249 08:39:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:32.249 08:39:07 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:32.249 08:39:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:32.249 08:39:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.249 08:39:07 -- common/autotest_common.sh@10 -- # set +x 00:18:32.249 ************************************ 00:18:32.249 START TEST blockdev_xnvme 00:18:32.249 ************************************ 00:18:32.249 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:32.249 * Looking for test storage... 00:18:32.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.509 08:39:07 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:32.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.509 --rc genhtml_branch_coverage=1 00:18:32.509 --rc genhtml_function_coverage=1 00:18:32.509 --rc genhtml_legend=1 00:18:32.509 --rc geninfo_all_blocks=1 00:18:32.509 --rc geninfo_unexecuted_blocks=1 00:18:32.509 00:18:32.509 ' 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:32.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.509 --rc genhtml_branch_coverage=1 00:18:32.509 --rc genhtml_function_coverage=1 00:18:32.509 --rc genhtml_legend=1 00:18:32.509 --rc geninfo_all_blocks=1 00:18:32.509 --rc geninfo_unexecuted_blocks=1 00:18:32.509 00:18:32.509 ' 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:32.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.509 --rc genhtml_branch_coverage=1 00:18:32.509 --rc genhtml_function_coverage=1 00:18:32.509 --rc genhtml_legend=1 00:18:32.509 --rc geninfo_all_blocks=1 00:18:32.509 --rc geninfo_unexecuted_blocks=1 00:18:32.509 00:18:32.509 ' 00:18:32.509 08:39:07 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:32.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.509 --rc genhtml_branch_coverage=1 00:18:32.509 --rc genhtml_function_coverage=1 00:18:32.509 --rc genhtml_legend=1 00:18:32.509 --rc geninfo_all_blocks=1 00:18:32.510 --rc geninfo_unexecuted_blocks=1 00:18:32.510 00:18:32.510 ' 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73532 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73532 00:18:32.510 08:39:07 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73532 ']' 00:18:32.510 08:39:07 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:32.510 08:39:07 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.510 08:39:07 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.510 08:39:07 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.510 08:39:07 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.510 08:39:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 [2024-11-22 08:39:07.570230] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:18:32.510 [2024-11-22 08:39:07.570533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73532 ] 00:18:32.769 [2024-11-22 08:39:07.756463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.028 [2024-11-22 08:39:07.857851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.967 08:39:08 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.967 08:39:08 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:18:33.967 08:39:08 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:33.967 08:39:08 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:18:33.967 08:39:08 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:18:33.967 08:39:08 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:18:33.967 08:39:08 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:34.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:35.106 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:18:35.106 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:35.106 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:18:35.106 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:18:35.107 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1
00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1
00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]]
00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:35.107 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme
in /dev/nvme*n* 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:18:35.367 nvme0n1 00:18:35.367 nvme0n2 00:18:35.367 nvme0n3 00:18:35.367 nvme1n1 00:18:35.367 nvme2n1 00:18:35.367 nvme3n1 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq 
-r '.[] | select(.claimed == false)' 00:18:35.367 08:39:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:35.367 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:35.368 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "724e79e2-2a83-4bcd-a97e-a21750b387f8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "724e79e2-2a83-4bcd-a97e-a21750b387f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ba080241-bc37-49fc-9b6b-fe6a3c8b229a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ba080241-bc37-49fc-9b6b-fe6a3c8b229a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "971e131e-14ad-423d-9332-ab530adcca09"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "971e131e-14ad-423d-9332-ab530adcca09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "24dcc808-f066-4915-a5e8-777bc75df08f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "24dcc808-f066-4915-a5e8-777bc75df08f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8cc63f06-9ec2-47ed-a386-476ed93e23c2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8cc63f06-9ec2-47ed-a386-476ed93e23c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6c055e7f-6d58-4ab3-8cf0-4f975a4b5955"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6c055e7f-6d58-4ab3-8cf0-4f975a4b5955",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:35.627 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:35.627 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:18:35.627 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:35.627 08:39:10 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73532 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73532 ']' 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73532 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73532 00:18:35.627 killing process with pid 73532 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73532' 00:18:35.627 08:39:10 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73532 00:18:35.627 08:39:10 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73532 00:18:38.164 08:39:12 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:38.164 08:39:12 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:38.164 08:39:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:38.164 08:39:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.164 08:39:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.164 ************************************ 00:18:38.164 START TEST bdev_hello_world 00:18:38.164 ************************************ 00:18:38.164 08:39:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:38.164 [2024-11-22 08:39:12.878952] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:18:38.164 [2024-11-22 08:39:12.879266] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73827 ] 00:18:38.164 [2024-11-22 08:39:13.066943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.164 [2024-11-22 08:39:13.172149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.731 [2024-11-22 08:39:13.592591] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:38.731 [2024-11-22 08:39:13.592802] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:18:38.731 [2024-11-22 08:39:13.592831] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:38.731 [2024-11-22 08:39:13.594960] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:38.731 [2024-11-22 08:39:13.595301] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:38.731 [2024-11-22 08:39:13.595321] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:38.731 [2024-11-22 08:39:13.595541] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
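Everything hello_bdev just demonstrated (open the bdev, open an io channel, write "Hello World!", read it back) ran against the first of the six xnvme bdevs registered earlier. A sketch of reproducing that setup by hand from the SPDK repo root, assuming a running spdk_tgt on the default /var/tmp/spdk.sock RPC socket; the create lines and the hello_bdev invocation are exactly the ones captured in this log, while the loop framing is ours:

    # Replay the six generated RPC lines against a running spdk_tgt:
    for dev in /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
        scripts/rpc.py bdev_xnvme_create "$dev" "${dev##*/}" io_uring -c
    done
    # The harness then snapshots the accel/bdev/iobuf subsystems with
    # save_subsystem_config into test/bdev/bdev.json; the example consumes it:
    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1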
00:18:38.731 00:18:38.731 [2024-11-22 08:39:13.595564] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:39.670 ************************************ 00:18:39.670 END TEST bdev_hello_world 00:18:39.670 ************************************ 00:18:39.670 00:18:39.670 real 0m1.847s 00:18:39.670 user 0m1.489s 00:18:39.670 sys 0m0.241s 00:18:39.670 08:39:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.670 08:39:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:39.670 08:39:14 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:39.670 08:39:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:39.670 08:39:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.670 08:39:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:39.670 ************************************ 00:18:39.670 START TEST bdev_bounds 00:18:39.670 ************************************ 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73869 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:39.670 Process bdevio pid: 73869 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73869' 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73869 00:18:39.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73869 ']' 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.670 08:39:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:39.930 [2024-11-22 08:39:14.802877] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:18:39.930 [2024-11-22 08:39:14.803021] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73869 ] 00:18:39.930 [2024-11-22 08:39:14.981975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:40.189 [2024-11-22 08:39:15.089724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.189 [2024-11-22 08:39:15.089849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.189 [2024-11-22 08:39:15.089879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.758 08:39:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.758 08:39:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:40.758 08:39:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:40.758 I/O targets: 00:18:40.758 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:40.758 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:40.758 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:40.758 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:40.758 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:40.758 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:40.758 00:18:40.758 00:18:40.758 CUnit - A unit testing framework for C - Version 2.1-3 00:18:40.758 http://cunit.sourceforge.net/ 00:18:40.758 00:18:40.758 00:18:40.758 Suite: bdevio tests on: nvme3n1 00:18:40.758 Test: blockdev write read block ...passed 00:18:40.758 Test: blockdev write zeroes read block ...passed 00:18:40.758 Test: blockdev write zeroes read no split ...passed 00:18:40.758 Test: blockdev write zeroes read split ...passed 00:18:40.758 Test: blockdev write zeroes read split partial ...passed 00:18:40.758 Test: blockdev reset ...passed 00:18:40.758 Test: blockdev write read 8 blocks ...passed 00:18:40.758 Test: blockdev write read size > 128k ...passed 00:18:40.758 Test: blockdev write read invalid size ...passed 00:18:40.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:40.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:40.758 Test: blockdev write read max offset ...passed 00:18:40.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:40.758 Test: blockdev writev readv 8 blocks ...passed 00:18:40.758 Test: blockdev writev readv 30 x 1block ...passed 00:18:40.758 Test: blockdev writev readv block ...passed 00:18:40.758 Test: blockdev writev readv size > 128k ...passed 00:18:40.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:40.758 Test: blockdev comparev and writev ...passed 00:18:40.758 Test: blockdev nvme passthru rw ...passed 00:18:40.758 Test: blockdev nvme passthru vendor specific ...passed 00:18:40.758 Test: blockdev nvme admin passthru ...passed 00:18:40.758 Test: blockdev copy ...passed 00:18:40.758 Suite: bdevio tests on: nvme2n1 00:18:40.758 Test: blockdev write read block ...passed 00:18:40.758 Test: blockdev write zeroes read block ...passed 00:18:40.758 Test: blockdev write zeroes read no split ...passed 00:18:41.018 Test: blockdev write zeroes read split ...passed 00:18:41.018 Test: blockdev write zeroes read split partial ...passed 00:18:41.018 Test: blockdev reset ...passed 
00:18:41.018 Test: blockdev write read 8 blocks ...passed 00:18:41.018 Test: blockdev write read size > 128k ...passed 00:18:41.018 Test: blockdev write read invalid size ...passed 00:18:41.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:41.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:41.018 Test: blockdev write read max offset ...passed 00:18:41.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:41.018 Test: blockdev writev readv 8 blocks ...passed 00:18:41.018 Test: blockdev writev readv 30 x 1block ...passed 00:18:41.018 Test: blockdev writev readv block ...passed 00:18:41.018 Test: blockdev writev readv size > 128k ...passed 00:18:41.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:41.018 Test: blockdev comparev and writev ...passed 00:18:41.018 Test: blockdev nvme passthru rw ...passed 00:18:41.018 Test: blockdev nvme passthru vendor specific ...passed 00:18:41.018 Test: blockdev nvme admin passthru ...passed 00:18:41.018 Test: blockdev copy ...passed 00:18:41.018 Suite: bdevio tests on: nvme1n1 00:18:41.018 Test: blockdev write read block ...passed 00:18:41.018 Test: blockdev write zeroes read block ...passed 00:18:41.018 Test: blockdev write zeroes read no split ...passed 00:18:41.018 Test: blockdev write zeroes read split ...passed 00:18:41.018 Test: blockdev write zeroes read split partial ...passed 00:18:41.018 Test: blockdev reset ...passed 00:18:41.018 Test: blockdev write read 8 blocks ...passed 00:18:41.018 Test: blockdev write read size > 128k ...passed 00:18:41.018 Test: blockdev write read invalid size ...passed 00:18:41.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:41.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:41.018 Test: blockdev write read max offset ...passed 00:18:41.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:41.018 Test: blockdev writev readv 8 blocks ...passed 00:18:41.018 Test: blockdev writev readv 30 x 1block ...passed 00:18:41.018 Test: blockdev writev readv block ...passed 00:18:41.018 Test: blockdev writev readv size > 128k ...passed 00:18:41.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:41.018 Test: blockdev comparev and writev ...passed 00:18:41.018 Test: blockdev nvme passthru rw ...passed 00:18:41.018 Test: blockdev nvme passthru vendor specific ...passed 00:18:41.018 Test: blockdev nvme admin passthru ...passed 00:18:41.018 Test: blockdev copy ...passed 00:18:41.018 Suite: bdevio tests on: nvme0n3 00:18:41.018 Test: blockdev write read block ...passed 00:18:41.018 Test: blockdev write zeroes read block ...passed 00:18:41.018 Test: blockdev write zeroes read no split ...passed 00:18:41.018 Test: blockdev write zeroes read split ...passed 00:18:41.018 Test: blockdev write zeroes read split partial ...passed 00:18:41.018 Test: blockdev reset ...passed 00:18:41.018 Test: blockdev write read 8 blocks ...passed 00:18:41.018 Test: blockdev write read size > 128k ...passed 00:18:41.018 Test: blockdev write read invalid size ...passed 00:18:41.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:41.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:41.018 Test: blockdev write read max offset ...passed 00:18:41.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:41.018 Test: blockdev writev readv 8 blocks 
...passed 00:18:41.018 Test: blockdev writev readv 30 x 1block ...passed 00:18:41.018 Test: blockdev writev readv block ...passed 00:18:41.018 Test: blockdev writev readv size > 128k ...passed 00:18:41.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:41.018 Test: blockdev comparev and writev ...passed 00:18:41.018 Test: blockdev nvme passthru rw ...passed 00:18:41.018 Test: blockdev nvme passthru vendor specific ...passed 00:18:41.018 Test: blockdev nvme admin passthru ...passed 00:18:41.018 Test: blockdev copy ...passed 00:18:41.018 Suite: bdevio tests on: nvme0n2 00:18:41.018 Test: blockdev write read block ...passed 00:18:41.018 Test: blockdev write zeroes read block ...passed 00:18:41.018 Test: blockdev write zeroes read no split ...passed 00:18:41.277 Test: blockdev write zeroes read split ...passed 00:18:41.277 Test: blockdev write zeroes read split partial ...passed 00:18:41.277 Test: blockdev reset ...passed 00:18:41.277 Test: blockdev write read 8 blocks ...passed 00:18:41.277 Test: blockdev write read size > 128k ...passed 00:18:41.277 Test: blockdev write read invalid size ...passed 00:18:41.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:41.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:41.277 Test: blockdev write read max offset ...passed 00:18:41.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:41.277 Test: blockdev writev readv 8 blocks ...passed 00:18:41.277 Test: blockdev writev readv 30 x 1block ...passed 00:18:41.277 Test: blockdev writev readv block ...passed 00:18:41.277 Test: blockdev writev readv size > 128k ...passed 00:18:41.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:41.278 Test: blockdev comparev and writev ...passed 00:18:41.278 Test: blockdev nvme passthru rw ...passed 00:18:41.278 Test: blockdev nvme passthru vendor specific ...passed 00:18:41.278 Test: blockdev nvme admin passthru ...passed 00:18:41.278 Test: blockdev copy ...passed 00:18:41.278 Suite: bdevio tests on: nvme0n1 00:18:41.278 Test: blockdev write read block ...passed 00:18:41.278 Test: blockdev write zeroes read block ...passed 00:18:41.278 Test: blockdev write zeroes read no split ...passed 00:18:41.278 Test: blockdev write zeroes read split ...passed 00:18:41.278 Test: blockdev write zeroes read split partial ...passed 00:18:41.278 Test: blockdev reset ...passed 00:18:41.278 Test: blockdev write read 8 blocks ...passed 00:18:41.278 Test: blockdev write read size > 128k ...passed 00:18:41.278 Test: blockdev write read invalid size ...passed 00:18:41.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:41.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:41.278 Test: blockdev write read max offset ...passed 00:18:41.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:41.278 Test: blockdev writev readv 8 blocks ...passed 00:18:41.278 Test: blockdev writev readv 30 x 1block ...passed 00:18:41.278 Test: blockdev writev readv block ...passed 00:18:41.278 Test: blockdev writev readv size > 128k ...passed 00:18:41.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:41.278 Test: blockdev comparev and writev ...passed 00:18:41.278 Test: blockdev nvme passthru rw ...passed 00:18:41.278 Test: blockdev nvme passthru vendor specific ...passed 00:18:41.278 Test: blockdev nvme admin passthru ...passed 00:18:41.278 Test: blockdev copy ...passed 
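Each of the six bdevio suites runs the same 23 cases, which is where the totals in the run summary below come from: 6 suites x 23 tests = 138 tests, all passed. To re-run the bounds check outside the harness, the moving parts are just the bdevio app, started with -w so it idles until tests are triggered over RPC, and the perform_tests script. The flags below are copied from the invocation earlier in this log; the socket-wait loop is our stand-in for the harness's waitforlisten helper:

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # Wait for the app's default RPC socket instead of waitforlisten:
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    test/bdev/bdevio/tests.py perform_tests   # drives all six suites over RPC
    kill "$bdevio_pid"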
00:18:41.278 00:18:41.278 Run Summary: Type Total Ran Passed Failed Inactive 00:18:41.278 suites 6 6 n/a 0 0 00:18:41.278 tests 138 138 138 0 0 00:18:41.278 asserts 780 780 780 0 n/a 00:18:41.278 00:18:41.278 Elapsed time = 1.322 seconds 00:18:41.278 0 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73869 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73869 ']' 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73869 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73869 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73869' 00:18:41.278 killing process with pid 73869 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73869 00:18:41.278 08:39:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73869 00:18:42.657 08:39:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:42.657 ************************************ 00:18:42.657 END TEST bdev_bounds 00:18:42.657 ************************************ 00:18:42.657 00:18:42.657 real 0m2.666s 00:18:42.657 user 0m6.587s 00:18:42.657 sys 0m0.444s 00:18:42.657 08:39:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.657 08:39:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:42.657 08:39:17 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:42.657 08:39:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:42.657 08:39:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.657 08:39:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:42.657 ************************************ 00:18:42.657 START TEST bdev_nbd 00:18:42.657 ************************************ 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
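bdev_nbd exports each bdev through the kernel's network block device driver, so it can only run when the nbd module is present; that is what the [[ -e /sys/module/nbd ]] probe just below is checking. A sketch of satisfying the prerequisite (the modprobe call is our assumption; the harness itself only performs the sysfs check):

    sudo modprobe nbd                        # provides /dev/nbd0, /dev/nbd1, ...
    [[ -e /sys/module/nbd ]] && echo "nbd module loaded"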
00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:42.657 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73923 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73923 /var/tmp/spdk-nbd.sock 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73923 ']' 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:42.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.658 08:39:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:42.658 [2024-11-22 08:39:17.581376] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
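Here a second SPDK app, bdev_svc, comes up with its RPC server on /var/tmp/spdk-nbd.sock so the nbd calls do not collide with the default socket. For each bdev, nbd_start_disk then binds it to a /dev/nbdX node, and the waitfornbd helper proves the node works with a single 4 KiB O_DIRECT read, retrying until the device shows up in /proc/partitions. One device's worth of that sequence, with commands copied from the trace below (the sleep cadence is our assumption; the helper allows up to 20 attempts):

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    for i in {1..20}; do
        grep -q -w nbd0 /proc/partitions && break   # node registered yet?
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct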
00:18:42.658 [2024-11-22 08:39:17.581673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.917 [2024-11-22 08:39:17.771603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.917 [2024-11-22 08:39:17.874994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:43.485 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:43.745 
1+0 records in 00:18:43.745 1+0 records out 00:18:43.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699298 s, 5.9 MB/s 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:43.745 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.004 1+0 records in 00:18:44.004 1+0 records out 00:18:44.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689093 s, 5.9 MB/s 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:44.004 08:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:44.263 08:39:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.263 1+0 records in 00:18:44.263 1+0 records out 00:18:44.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131601 s, 3.1 MB/s 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:44.263 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.523 1+0 records in 00:18:44.523 1+0 records out 00:18:44.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729915 s, 5.6 MB/s 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:44.523 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.783 1+0 records in 00:18:44.783 1+0 records out 00:18:44.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924919 s, 4.4 MB/s 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:44.783 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:18:45.044 08:39:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.044 1+0 records in 00:18:45.044 1+0 records out 00:18:45.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794193 s, 5.2 MB/s 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:45.044 08:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd0", 00:18:45.304 "bdev_name": "nvme0n1" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd1", 00:18:45.304 "bdev_name": "nvme0n2" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd2", 00:18:45.304 "bdev_name": "nvme0n3" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd3", 00:18:45.304 "bdev_name": "nvme1n1" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd4", 00:18:45.304 "bdev_name": "nvme2n1" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd5", 00:18:45.304 "bdev_name": "nvme3n1" 00:18:45.304 } 00:18:45.304 ]' 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd0", 00:18:45.304 "bdev_name": "nvme0n1" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd1", 00:18:45.304 "bdev_name": "nvme0n2" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd2", 00:18:45.304 "bdev_name": "nvme0n3" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd3", 00:18:45.304 "bdev_name": "nvme1n1" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd4", 00:18:45.304 "bdev_name": "nvme2n1" 00:18:45.304 }, 00:18:45.304 { 00:18:45.304 "nbd_device": "/dev/nbd5", 00:18:45.304 "bdev_name": "nvme3n1" 00:18:45.304 } 00:18:45.304 ]' 00:18:45.304 08:39:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.304 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.562 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.821 08:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.081 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.340 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.600 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
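# A minimal sketch of the stop-and-wait pattern traced above, assuming the
# same rpc.py path and RPC socket as this test run. nbd_stop_disk detaches
# the export, then the caller polls /proc/partitions until the nbd name
# disappears. The 0.1s retry delay is an assumption: the xtrace output
# shows the 20-try loop bounds but not the sleep between iterations.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"             # ask SPDK to detach the export
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do                    # same bound as waitfornbd_exit
        grep -q -w "$name" /proc/partitions || break   # gone from the kernel: done
        sleep 0.1                                      # assumed delay, not visible above
    done
done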
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:46.860 08:39:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:47.121 /dev/nbd0 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.121 1+0 records in 00:18:47.121 1+0 records out 00:18:47.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676429 s, 6.1 MB/s 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:47.121 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:18:47.381 /dev/nbd1 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.381 1+0 records in 00:18:47.381 1+0 records out 00:18:47.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681845 s, 6.0 MB/s 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:47.381 08:39:22 
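# A reconstruction of the waitfornbd readiness probe traced here: after
# nbd_start_disk, the helper waits for the device name to appear in
# /proc/partitions, then proves the device can actually serve I/O by
# reading one 4 KiB block with O_DIRECT and checking that a non-zero
# number of bytes landed in the scratch file. Paths match the trace; the
# retry delay is again an assumption.
waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                  # assumed; only the loop bounds are traced
    done
    dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]                               # a zero-byte read means not ready
}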
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:47.381 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:18:47.640 /dev/nbd10 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.641 1+0 records in 00:18:47.641 1+0 records out 00:18:47.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818852 s, 5.0 MB/s 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:47.641 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:18:47.900 /dev/nbd11 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.900 08:39:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:47.900 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.901 1+0 records in 00:18:47.901 1+0 records out 00:18:47.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685759 s, 6.0 MB/s 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:47.901 08:39:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:18:48.160 /dev/nbd12 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.160 1+0 records in 00:18:48.160 1+0 records out 00:18:48.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818983 s, 5.0 MB/s 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:48.160 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:48.420 /dev/nbd13 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.420 1+0 records in 00:18:48.420 1+0 records out 00:18:48.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733818 s, 5.6 MB/s 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.420 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd0", 00:18:48.680 "bdev_name": "nvme0n1" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd1", 00:18:48.680 "bdev_name": "nvme0n2" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd10", 00:18:48.680 "bdev_name": "nvme0n3" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd11", 00:18:48.680 "bdev_name": "nvme1n1" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd12", 00:18:48.680 "bdev_name": "nvme2n1" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd13", 00:18:48.680 "bdev_name": "nvme3n1" 00:18:48.680 } 00:18:48.680 ]' 00:18:48.680 08:39:23 
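# nbd_get_count, traced at this point, turns the nbd_get_disks JSON just
# captured into a plain device count: jq pulls every .nbd_device field
# and grep -c counts the /dev/nbd entries ("|| true" swallows grep's
# non-zero exit when the list is empty, so an idle daemon yields count=0
# instead of aborting under set -e). A condensed sketch of the pipeline:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
count=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "count=$count"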
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd0", 00:18:48.680 "bdev_name": "nvme0n1" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd1", 00:18:48.680 "bdev_name": "nvme0n2" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd10", 00:18:48.680 "bdev_name": "nvme0n3" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd11", 00:18:48.680 "bdev_name": "nvme1n1" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd12", 00:18:48.680 "bdev_name": "nvme2n1" 00:18:48.680 }, 00:18:48.680 { 00:18:48.680 "nbd_device": "/dev/nbd13", 00:18:48.680 "bdev_name": "nvme3n1" 00:18:48.680 } 00:18:48.680 ]' 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:48.680 /dev/nbd1 00:18:48.680 /dev/nbd10 00:18:48.680 /dev/nbd11 00:18:48.680 /dev/nbd12 00:18:48.680 /dev/nbd13' 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:48.680 /dev/nbd1 00:18:48.680 /dev/nbd10 00:18:48.680 /dev/nbd11 00:18:48.680 /dev/nbd12 00:18:48.680 /dev/nbd13' 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:48.680 256+0 records in 00:18:48.680 256+0 records out 00:18:48.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011826 s, 88.7 MB/s 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:48.680 256+0 records in 00:18:48.680 256+0 records out 00:18:48.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124845 s, 8.4 MB/s 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:48.680 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:48.940 256+0 records in 00:18:48.940 256+0 records out 00:18:48.940 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.13258 s, 7.9 MB/s 00:18:48.940 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:48.940 08:39:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:48.940 256+0 records in 00:18:48.940 256+0 records out 00:18:48.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127436 s, 8.2 MB/s 00:18:48.940 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:48.940 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:49.199 256+0 records in 00:18:49.199 256+0 records out 00:18:49.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129095 s, 8.1 MB/s 00:18:49.199 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:49.199 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:49.459 256+0 records in 00:18:49.459 256+0 records out 00:18:49.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158652 s, 6.6 MB/s 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:49.459 256+0 records in 00:18:49.459 256+0 records out 00:18:49.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134174 s, 7.8 MB/s 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
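# A condensed sketch of the nbd_dd_data_verify pass traced around this
# point: one 1 MiB random pattern is staged in a temp file, written onto
# every NBD device with O_DIRECT, and then compared back byte-for-byte
# with cmp. Paths, sizes, and flags match the trace.
pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of="$pattern" bs=4096 count=256             # stage 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct  # write phase
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$pattern" "$dev"                             # verify: fail on any mismatch
done
rm "$pattern"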
/dev/nbd11 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:49.459 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:49.720 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.002 08:39:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.270 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.530 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:50.790 08:39:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:51.049 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:51.309 malloc_lvol_verify 00:18:51.309 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:51.569 d41d8c5f-8577-474b-83bc-c99ee4b31878 00:18:51.569 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:51.828 fddee8d2-fe4b-45e7-9259-00573aad5d36 00:18:51.828 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:52.088 /dev/nbd0 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:52.088 mke2fs 1.47.0 (5-Feb-2023) 00:18:52.088 
Discarding device blocks: 0/4096 done 00:18:52.088 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:52.088 00:18:52.088 Allocating group tables: 0/1 done 00:18:52.088 Writing inode tables: 0/1 done 00:18:52.088 Creating journal (1024 blocks): done 00:18:52.088 Writing superblocks and filesystem accounting information: 0/1 done 00:18:52.088 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:52.088 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:52.089 08:39:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73923 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73923 ']' 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73923 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73923 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.348 killing process with pid 73923 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73923' 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73923 00:18:52.348 08:39:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73923 00:18:53.287 08:39:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:53.287 00:18:53.287 real 0m10.881s 00:18:53.287 user 0m13.859s 00:18:53.287 sys 0m4.809s 00:18:53.287 08:39:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.287 ************************************ 00:18:53.287 END TEST bdev_nbd 00:18:53.287 ************************************ 00:18:53.287 08:39:28 blockdev_xnvme.bdev_nbd -- 
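# The nbd_with_lvol_verify step traced above stacks a logical volume on a
# malloc bdev, exports it over NBD, and proves the kernel sees a usable
# capacity by formatting it. The RPC calls are exactly those in the trace
# (a 16 MB malloc bdev with 512-byte blocks, a 4 MB lvol); only the
# variable names here are mine.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # backing bdev
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol in that store
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export via NBD
mkfs.ext4 /dev/nbd0                                                 # capacity/I-O sanity check
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0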
common/autotest_common.sh@10 -- # set +x 00:18:53.547 08:39:28 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:53.547 08:39:28 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:18:53.547 08:39:28 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:18:53.547 08:39:28 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:53.547 08:39:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:53.547 08:39:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.547 08:39:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.547 ************************************ 00:18:53.547 START TEST bdev_fio 00:18:53.547 ************************************ 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:53.547 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:53.547 
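# From here bdev_fio assembles the fio job file piece by piece: the
# global section comes from a template (it is cat'd, so its contents are
# not visible in the xtrace), serialize_overlap=1 appears to be appended
# after the AIO/fio-3.x version checks pass, and the loop below then adds
# one two-line stanza per bdev. A sketch of that append loop, using the
# same job-file path as the trace; the >> redirections are inferred,
# since xtrace does not display redirections.
fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
    echo "[job_${b}]"  >> "$fio_cfg"   # one fio job section per bdev
    echo "filename=$b" >> "$fio_cfg"   # the spdk_bdev engine takes the bdev name
done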
08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:18:53.547 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:53.548 ************************************ 00:18:53.548 START TEST bdev_fio_rw_verify 00:18:53.548 ************************************ 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
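# The rw-verify run resolves which ASan runtime the SPDK fio plugin is
# linked against and preloads it ahead of the plugin itself, so the
# sanitizer initializes before any instrumented code runs. A condensed
# sketch of the resolution traced in the surrounding records, with the
# fio arguments shortened to the ones shown:
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json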
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:53.548 08:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:53.808 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:53.808 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:53.808 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:53.808 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:53.808 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:53.808 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:53.808 fio-3.35 00:18:53.808 Starting 6 threads 00:19:06.019 00:19:06.019 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74331: Fri Nov 22 08:39:39 2024 00:19:06.019 read: IOPS=32.6k, BW=127MiB/s (134MB/s)(1275MiB/10001msec) 00:19:06.019 slat (usec): min=2, max=2318, avg= 7.83, stdev= 7.54 00:19:06.019 clat (usec): min=72, max=3879, avg=564.31, 
stdev=238.75 00:19:06.019 lat (usec): min=77, max=3895, avg=572.14, stdev=240.12 00:19:06.019 clat percentiles (usec): 00:19:06.019 | 50.000th=[ 570], 99.000th=[ 1172], 99.900th=[ 1844], 99.990th=[ 3523], 00:19:06.019 | 99.999th=[ 3851] 00:19:06.019 write: IOPS=33.1k, BW=129MiB/s (135MB/s)(1292MiB/10001msec); 0 zone resets 00:19:06.019 slat (usec): min=10, max=4193, avg=24.51, stdev=33.03 00:19:06.019 clat (usec): min=77, max=8042, avg=660.31, stdev=253.21 00:19:06.019 lat (usec): min=98, max=8059, avg=684.82, stdev=257.37 00:19:06.019 clat percentiles (usec): 00:19:06.019 | 50.000th=[ 668], 99.000th=[ 1401], 99.900th=[ 1975], 99.990th=[ 3032], 00:19:06.019 | 99.999th=[ 8029] 00:19:06.019 bw ( KiB/s): min=107295, max=159295, per=99.23%, avg=131229.68, stdev=2362.60, samples=114 00:19:06.019 iops : min=26823, max=39823, avg=32807.11, stdev=590.64, samples=114 00:19:06.019 lat (usec) : 100=0.01%, 250=7.21%, 500=25.12%, 750=40.57%, 1000=22.25% 00:19:06.019 lat (msec) : 2=4.76%, 4=0.08%, 10=0.01% 00:19:06.019 cpu : usr=57.19%, sys=28.65%, ctx=8161, majf=0, minf=27247 00:19:06.019 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.019 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.019 issued rwts: total=326400,330644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.019 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:06.019 00:19:06.019 Run status group 0 (all jobs): 00:19:06.019 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=1275MiB (1337MB), run=10001-10001msec 00:19:06.019 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=1292MiB (1354MB), run=10001-10001msec 00:19:06.019 ----------------------------------------------------- 00:19:06.019 Suppressions used: 00:19:06.019 count bytes template 00:19:06.019 6 48 /usr/src/fio/parse.c 00:19:06.019 4028 386688 /usr/src/fio/iolog.c 00:19:06.019 1 8 libtcmalloc_minimal.so 00:19:06.019 1 904 libcrypto.so 00:19:06.019 ----------------------------------------------------- 00:19:06.019 00:19:06.019 00:19:06.019 real 0m12.515s 00:19:06.019 user 0m36.338s 00:19:06.019 sys 0m17.605s 00:19:06.019 08:39:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.019 08:39:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:06.019 ************************************ 00:19:06.019 END TEST bdev_fio_rw_verify 00:19:06.019 ************************************ 00:19:06.019 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "724e79e2-2a83-4bcd-a97e-a21750b387f8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "724e79e2-2a83-4bcd-a97e-a21750b387f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ba080241-bc37-49fc-9b6b-fe6a3c8b229a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ba080241-bc37-49fc-9b6b-fe6a3c8b229a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "971e131e-14ad-423d-9332-ab530adcca09"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "971e131e-14ad-423d-9332-ab530adcca09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "24dcc808-f066-4915-a5e8-777bc75df08f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "24dcc808-f066-4915-a5e8-777bc75df08f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8cc63f06-9ec2-47ed-a386-476ed93e23c2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8cc63f06-9ec2-47ed-a386-476ed93e23c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6c055e7f-6d58-4ab3-8cf0-4f975a4b5955"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6c055e7f-6d58-4ab3-8cf0-4f975a4b5955",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:06.279 /home/vagrant/spdk_repo/spdk 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:06.279
00:19:06.279 real 0m12.754s
00:19:06.279 user 0m36.450s
00:19:06.279 sys 0m17.737s
00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:06.279 08:39:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:19:06.279 ************************************
00:19:06.279 END TEST bdev_fio
00:19:06.279 ************************************
00:19:06.279 08:39:41 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:19:06.279 08:39:41 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:06.279 08:39:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:06.279 08:39:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:06.279 08:39:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:06.279 ************************************
00:19:06.279 START TEST bdev_verify
00:19:06.279 ************************************
00:19:06.279 08:39:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:06.279 [2024-11-22 08:39:41.348177] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:19:06.279 [2024-11-22 08:39:41.348307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74505 ]
00:19:06.539 [2024-11-22 08:39:41.536058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:06.799 [2024-11-22 08:39:41.642702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:06.799 [2024-11-22 08:39:41.642733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:07.058 Running I/O for 5 seconds...
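
The bdev_verify stage above is a plain bdevperf run against the bdevs defined in bdev.json. A minimal sketch of the same invocation outside the test harness follows; the binary path, JSON config and every flag are copied from the run_test line above, and running it by hand assumes the same repo layout and an already-generated bdev.json:

  # Sketch of the bdev_verify invocation (flags as seen in the log):
  #   -q 128    queue depth per job
  #   -o 4096   I/O size in bytes
  #   -w verify write, read back and compare
  #   -t 5      run time in seconds
  #   -C        every core submits I/O to every bdev, which is why each
  #             device gets a Core Mask 0x1 and a 0x2 row in the table below
  #   -m 0x3    two-reactor core mask ("Total cores available: 2" above)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
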
00:19:09.371 27200.00 IOPS, 106.25 MiB/s [2024-11-22T08:39:45.394Z] 26240.00 IOPS, 102.50 MiB/s [2024-11-22T08:39:46.329Z] 25770.67 IOPS, 100.67 MiB/s [2024-11-22T08:39:47.270Z] 25152.00 IOPS, 98.25 MiB/s [2024-11-22T08:39:47.270Z] 24704.00 IOPS, 96.50 MiB/s
00:19:12.183 Latency(us)
[2024-11-22T08:39:47.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:12.183 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x0 length 0x80000
00:19:12.183 nvme0n1 : 5.07 1793.01 7.00 0.00 0.00 71280.13 7001.03 68641.72
00:19:12.183 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x80000 length 0x80000
00:19:12.183 nvme0n1 : 5.03 2009.27 7.85 0.00 0.00 63617.74 13580.95 85907.43
00:19:12.183 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x0 length 0x80000
00:19:12.183 nvme0n2 : 5.06 1771.16 6.92 0.00 0.00 72069.73 14107.35 69905.07
00:19:12.183 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x80000 length 0x80000
00:19:12.183 nvme0n2 : 5.02 2013.22 7.86 0.00 0.00 63410.13 10422.59 89276.35
00:19:12.183 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x0 length 0x80000
00:19:12.183 nvme0n3 : 5.06 1769.70 6.91 0.00 0.00 72039.57 15581.25 66957.26
00:19:12.183 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x80000 length 0x80000
00:19:12.183 nvme0n3 : 5.03 2008.66 7.85 0.00 0.00 63484.14 14002.07 66115.03
00:19:12.183 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x0 length 0x20000
00:19:12.183 nvme1n1 : 5.07 1768.62 6.91 0.00 0.00 71994.85 9211.89 68220.61
00:19:12.183 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x20000 length 0x20000
00:19:12.183 nvme1n1 : 5.05 2027.37 7.92 0.00 0.00 62818.28 9685.64 76221.79
00:19:12.183 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x0 length 0xbd0bd
00:19:12.183 nvme2n1 : 5.07 2684.02 10.48 0.00 0.00 47233.36 4737.54 70326.18
00:19:12.183 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:19:12.183 nvme2n1 : 5.04 2915.17 11.39 0.00 0.00 43615.60 5948.25 52849.91
00:19:12.183 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0x0 length 0xa0000
00:19:12.183 nvme3n1 : 5.06 1771.96 6.92 0.00 0.00 71545.04 9843.56 69483.95
00:19:12.183 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:12.183 Verification LBA range: start 0xa0000 length 0xa0000
00:19:12.183 nvme3n1 : 5.04 2031.19 7.93 0.00 0.00 62537.92 6527.28 64430.57
00:19:12.183 [2024-11-22T08:39:47.270Z] ===================================================================================================================
00:19:12.183 [2024-11-22T08:39:47.270Z] Total : 24563.35 95.95 0.00 0.00 62227.78 4737.54 89276.35
00:19:13.560
00:19:13.560 real 0m7.031s
00:19:13.560 user 0m10.647s
00:19:13.560 sys 0m2.051s
08:39:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:13.560 08:39:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:19:13.560 ************************************
00:19:13.560 END TEST bdev_verify
00:19:13.560 ************************************
00:19:13.560 08:39:48 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:13.560 08:39:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:13.560 08:39:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:13.560 08:39:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:13.560 ************************************
00:19:13.560 START TEST bdev_verify_big_io
00:19:13.560 ************************************
00:19:13.560 08:39:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:13.560 [2024-11-22 08:39:48.448201] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:19:13.560 [2024-11-22 08:39:48.448335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74607 ]
00:19:13.819 [2024-11-22 08:39:48.628147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:13.819 [2024-11-22 08:39:48.738240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:13.819 [2024-11-22 08:39:48.738252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:14.386 Running I/O for 5 seconds...
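
bdev_verify_big_io repeats the same bdevperf verify run with only the I/O size changed from 4 KiB to 64 KiB. A hedged sketch of the delta, with a consistency check against the progress counters reported below:

  # Same harness as bdev_verify, but 64 KiB blocks:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3
  # Cross-check: 3968 IOPS x 64 KiB = 3968/16 MiB/s = 248.00 MiB/s,
  # matching the final progress figure in the output below.
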
00:19:19.701 2832.00 IOPS, 177.00 MiB/s [2024-11-22T08:39:55.355Z] 3884.00 IOPS, 242.75 MiB/s [2024-11-22T08:39:55.614Z] 3968.00 IOPS, 248.00 MiB/s
00:19:20.527 Latency(us)
[2024-11-22T08:39:55.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:20.527 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x0 length 0x8000
00:19:20.527 nvme0n1 : 5.51 94.32 5.90 0.00 0.00 1285553.53 124650.00 1435159.44
00:19:20.527 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x8000 length 0x8000
00:19:20.527 nvme0n1 : 5.41 177.42 11.09 0.00 0.00 709831.21 95593.07 1529489.17
00:19:20.527 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x0 length 0x8000
00:19:20.527 nvme0n2 : 5.66 114.54 7.16 0.00 0.00 998042.16 4474.35 1105005.39
00:19:20.527 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x8000 length 0x8000
00:19:20.527 nvme0n2 : 5.46 228.71 14.29 0.00 0.00 542141.18 74537.33 515444.59
00:19:20.527 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x0 length 0x8000
00:19:20.527 nvme0n3 : 5.80 107.63 6.73 0.00 0.00 1039501.62 21897.97 1994399.97
00:19:20.527 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x8000 length 0x8000
00:19:20.527 nvme0n3 : 5.41 239.39 14.96 0.00 0.00 506951.88 5184.98 667045.94
00:19:20.527 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x0 length 0x2000
00:19:20.527 nvme1n1 : 5.88 149.66 9.35 0.00 0.00 714944.88 59798.31 1873118.89
00:19:20.527 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x2000 length 0x2000
00:19:20.527 nvme1n1 : 5.46 228.60 14.29 0.00 0.00 524077.95 74537.33 495231.07
00:19:20.527 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x0 length 0xbd0b
00:19:20.527 nvme2n1 : 6.09 186.62 11.66 0.00 0.00 558534.09 2763.57 2560378.35
00:19:20.527 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0xbd0b length 0xbd0b
00:19:20.527 nvme2n1 : 5.47 266.11 16.63 0.00 0.00 442562.90 8474.94 545764.86
00:19:20.527 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0x0 length 0xa000
00:19:20.527 nvme3n1 : 6.23 277.36 17.33 0.00 0.00 361648.14 463.88 2654708.07
00:19:20.527 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:20.527 Verification LBA range: start 0xa000 length 0xa000
00:19:20.527 nvme3n1 : 5.47 239.69 14.98 0.00 0.00 484490.90 4421.71 882656.75
00:19:20.527 [2024-11-22T08:39:55.614Z] ===================================================================================================================
00:19:20.527 [2024-11-22T08:39:55.614Z] Total : 2310.05 144.38 0.00 0.00 597545.58 463.88 2654708.07
00:19:21.906
00:19:21.906 real 0m8.509s
00:19:21.906 user 0m15.494s
00:19:21.906 sys 0m0.581s
08:39:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:21.906 08:39:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:21.906 ************************************
00:19:21.906 END TEST bdev_verify_big_io
00:19:21.906 ************************************
00:19:21.906 08:39:56 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:21.906 08:39:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:21.906 08:39:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:21.906 08:39:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:21.906 ************************************
00:19:21.906 START TEST bdev_write_zeroes
00:19:21.906 ************************************
00:19:21.907 08:39:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:22.166 [2024-11-22 08:39:57.059996] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:19:22.166 [2024-11-22 08:39:57.060162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74723 ]
00:19:22.426 [2024-11-22 08:39:57.257093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:22.426 [2024-11-22 08:39:57.363717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:22.993 Running I/O for 1 seconds...
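
bdev_write_zeroes swaps the workload for write_zeroes and runs for one second on a single core (the EAL line above shows -c 0x1, and only reactor core 0 starts). A sketch of the run, with the throughput arithmetic made explicit:

  # write_zeroes pass from the run_test line above, single core, 1 second:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1
  # Cross-check: 53504 IOPS x 4 KiB = 53504/256 MiB/s = 209.00 MiB/s,
  # matching the first line of the result block below.
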
00:19:23.931 53504.00 IOPS, 209.00 MiB/s
00:19:23.931 Latency(us)
[2024-11-22T08:39:59.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:23.931 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:23.931 nvme0n1 : 1.03 8466.32 33.07 0.00 0.00 15106.13 8738.13 30741.38
00:19:23.931 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:23.931 nvme0n2 : 1.03 8457.68 33.04 0.00 0.00 15112.98 8843.41 31162.50
00:19:23.931 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:23.931 nvme0n3 : 1.03 8449.44 33.01 0.00 0.00 15117.04 8738.13 32215.29
00:19:23.931 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:23.931 nvme1n1 : 1.03 8441.06 32.97 0.00 0.00 15122.32 8790.77 34952.53
00:19:23.931 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:23.931 nvme2n1 : 1.03 10971.40 42.86 0.00 0.00 11624.91 4448.03 27161.91
00:19:23.931 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:23.931 nvme3n1 : 1.03 8556.54 33.42 0.00 0.00 14835.27 4184.83 33268.07
00:19:23.931 [2024-11-22T08:39:59.018Z] ===================================================================================================================
00:19:23.931 [2024-11-22T08:39:59.018Z] Total : 53342.43 208.37 0.00 0.00 14353.85 4184.83 34952.53
00:19:24.872
00:19:24.872 real 0m2.961s
00:19:24.872 user 0m2.188s
00:19:24.872 sys 0m0.584s
08:39:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:24.872 08:39:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:24.872 ************************************
00:19:24.872 END TEST bdev_write_zeroes
00:19:24.872 ************************************
00:19:25.132 08:39:59 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:25.132 08:39:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:25.132 08:39:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:25.132 08:39:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:25.132 ************************************
00:19:25.132 START TEST bdev_json_nonenclosed
00:19:25.132 ************************************
00:19:25.132 08:39:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:25.132 [2024-11-22 08:40:00.068372] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:19:25.132 [2024-11-22 08:40:00.068500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74782 ]
00:19:25.392 [2024-11-22 08:40:00.249491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:25.392 [2024-11-22 08:40:00.359819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:25.392 [2024-11-22 08:40:00.359929] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:25.392 [2024-11-22 08:40:00.359950] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:25.392 [2024-11-22 08:40:00.359963] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:25.651
00:19:25.651 real 0m0.627s
00:19:25.651 user 0m0.377s
00:19:25.651 sys 0m0.145s
00:19:25.651 08:40:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:25.651 08:40:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:25.651 ************************************
00:19:25.651 END TEST bdev_json_nonenclosed
00:19:25.651 ************************************
00:19:25.651 08:40:00 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:25.651 08:40:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:25.651 08:40:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:25.651 08:40:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:25.651 ************************************
00:19:25.651 START TEST bdev_json_nonarray
00:19:25.651 ************************************
00:19:25.651 08:40:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:25.911 [2024-11-22 08:40:00.794049] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:19:25.911 [2024-11-22 08:40:00.794238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74806 ]
00:19:25.911 [2024-11-22 08:40:00.990407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:26.171 [2024-11-22 08:40:01.101593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:26.171 [2024-11-22 08:40:01.101698] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
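
bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed --json config and passes only if the app rejects it. The actual nonenclosed.json and nonarray.json files are not reproduced in this log; the shapes below are hypothetical reconstructions implied by the two error messages:

  # Hypothetical: a top-level value that is not an object trips
  # "Invalid JSON configuration: not enclosed in {}."
  cat > /tmp/nonenclosed.json <<'EOF'
  [ { "subsystems": [] } ]
  EOF
  # Hypothetical: "subsystems" present but not an array trips
  # "Invalid JSON configuration: 'subsystems' should be an array."
  cat > /tmp/nonarray.json <<'EOF'
  { "subsystems": {} }
  EOF
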
00:19:26.171 [2024-11-22 08:40:01.101720] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:26.171 [2024-11-22 08:40:01.101732] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:26.430 00:19:26.430 real 0m0.676s 00:19:26.430 user 0m0.406s 00:19:26.430 sys 0m0.164s 00:19:26.430 08:40:01 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.430 ************************************ 00:19:26.430 END TEST bdev_json_nonarray 00:19:26.430 ************************************ 00:19:26.430 08:40:01 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:19:26.430 08:40:01 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:27.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:31.573 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:31.573 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:31.573 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:31.573 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:31.573 00:19:31.573 real 0m58.762s 00:19:31.573 user 1m34.419s 00:19:31.573 sys 0m35.565s 00:19:31.573 08:40:05 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.573 08:40:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:31.573 ************************************ 00:19:31.573 END TEST blockdev_xnvme 00:19:31.573 ************************************ 00:19:31.573 08:40:06 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:31.573 08:40:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:31.573 08:40:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.573 08:40:06 -- common/autotest_common.sh@10 -- # set +x 00:19:31.573 ************************************ 00:19:31.573 START TEST ublk 00:19:31.573 ************************************ 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:31.573 * Looking for test storage... 
00:19:31.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:31.573 08:40:06 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.573 08:40:06 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.573 08:40:06 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.573 08:40:06 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.573 08:40:06 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.573 08:40:06 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.573 08:40:06 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.573 08:40:06 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.573 08:40:06 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.573 08:40:06 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.573 08:40:06 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.573 08:40:06 ublk -- scripts/common.sh@344 -- # case "$op" in 00:19:31.573 08:40:06 ublk -- scripts/common.sh@345 -- # : 1 00:19:31.573 08:40:06 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.573 08:40:06 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:31.573 08:40:06 ublk -- scripts/common.sh@365 -- # decimal 1 00:19:31.573 08:40:06 ublk -- scripts/common.sh@353 -- # local d=1 00:19:31.573 08:40:06 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.573 08:40:06 ublk -- scripts/common.sh@355 -- # echo 1 00:19:31.573 08:40:06 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.573 08:40:06 ublk -- scripts/common.sh@366 -- # decimal 2 00:19:31.573 08:40:06 ublk -- scripts/common.sh@353 -- # local d=2 00:19:31.573 08:40:06 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.573 08:40:06 ublk -- scripts/common.sh@355 -- # echo 2 00:19:31.573 08:40:06 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.573 08:40:06 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.573 08:40:06 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.573 08:40:06 ublk -- scripts/common.sh@368 -- # return 0 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.573 --rc genhtml_branch_coverage=1 00:19:31.573 --rc genhtml_function_coverage=1 00:19:31.573 --rc genhtml_legend=1 00:19:31.573 --rc geninfo_all_blocks=1 00:19:31.573 --rc geninfo_unexecuted_blocks=1 00:19:31.573 00:19:31.573 ' 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.573 --rc genhtml_branch_coverage=1 00:19:31.573 --rc genhtml_function_coverage=1 00:19:31.573 --rc genhtml_legend=1 00:19:31.573 --rc geninfo_all_blocks=1 00:19:31.573 --rc geninfo_unexecuted_blocks=1 00:19:31.573 00:19:31.573 ' 00:19:31.573 08:40:06 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.573 --rc genhtml_branch_coverage=1 00:19:31.573 --rc 
genhtml_function_coverage=1 00:19:31.573 --rc genhtml_legend=1 00:19:31.573 --rc geninfo_all_blocks=1 00:19:31.573 --rc geninfo_unexecuted_blocks=1 00:19:31.573 00:19:31.573 ' 00:19:31.574 08:40:06 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.574 --rc genhtml_branch_coverage=1 00:19:31.574 --rc genhtml_function_coverage=1 00:19:31.574 --rc genhtml_legend=1 00:19:31.574 --rc geninfo_all_blocks=1 00:19:31.574 --rc geninfo_unexecuted_blocks=1 00:19:31.574 00:19:31.574 ' 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:31.574 08:40:06 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:31.574 08:40:06 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:31.574 08:40:06 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:31.574 08:40:06 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:31.574 08:40:06 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:31.574 08:40:06 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:31.574 08:40:06 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:31.574 08:40:06 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:19:31.574 08:40:06 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:19:31.574 08:40:06 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:31.574 08:40:06 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.574 08:40:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:31.574 ************************************ 00:19:31.574 START TEST test_save_ublk_config 00:19:31.574 ************************************ 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75104 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75104 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75104 ']' 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:31.574 08:40:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.574 08:40:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:31.574 [2024-11-22 08:40:06.355571] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:19:31.574 [2024-11-22 08:40:06.355715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75104 ] 00:19:31.574 [2024-11-22 08:40:06.538838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.574 [2024-11-22 08:40:06.654100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.548 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.548 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:32.548 08:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:19:32.548 08:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:19:32.548 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.548 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:32.548 [2024-11-22 08:40:07.563985] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:32.548 [2024-11-22 08:40:07.565167] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:32.807 malloc0 00:19:32.807 [2024-11-22 08:40:07.652126] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:32.807 [2024-11-22 08:40:07.652213] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:32.807 [2024-11-22 08:40:07.652226] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:32.807 [2024-11-22 08:40:07.652235] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:32.807 [2024-11-22 08:40:07.661085] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:32.807 [2024-11-22 08:40:07.661112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:32.807 [2024-11-22 08:40:07.667990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:32.807 [2024-11-22 08:40:07.668095] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:32.807 [2024-11-22 08:40:07.684980] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:32.807 0 00:19:32.807 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.807 08:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:19:32.807 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.807 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:33.067 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.067 08:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:19:33.067 
"subsystems": [ 00:19:33.067 { 00:19:33.067 "subsystem": "fsdev", 00:19:33.067 "config": [ 00:19:33.067 { 00:19:33.067 "method": "fsdev_set_opts", 00:19:33.067 "params": { 00:19:33.067 "fsdev_io_pool_size": 65535, 00:19:33.067 "fsdev_io_cache_size": 256 00:19:33.067 } 00:19:33.067 } 00:19:33.067 ] 00:19:33.067 }, 00:19:33.067 { 00:19:33.067 "subsystem": "keyring", 00:19:33.067 "config": [] 00:19:33.067 }, 00:19:33.067 { 00:19:33.067 "subsystem": "iobuf", 00:19:33.067 "config": [ 00:19:33.067 { 00:19:33.067 "method": "iobuf_set_options", 00:19:33.067 "params": { 00:19:33.067 "small_pool_count": 8192, 00:19:33.067 "large_pool_count": 1024, 00:19:33.067 "small_bufsize": 8192, 00:19:33.067 "large_bufsize": 135168, 00:19:33.067 "enable_numa": false 00:19:33.067 } 00:19:33.067 } 00:19:33.067 ] 00:19:33.067 }, 00:19:33.067 { 00:19:33.067 "subsystem": "sock", 00:19:33.067 "config": [ 00:19:33.067 { 00:19:33.067 "method": "sock_set_default_impl", 00:19:33.067 "params": { 00:19:33.067 "impl_name": "posix" 00:19:33.067 } 00:19:33.067 }, 00:19:33.067 { 00:19:33.067 "method": "sock_impl_set_options", 00:19:33.067 "params": { 00:19:33.067 "impl_name": "ssl", 00:19:33.067 "recv_buf_size": 4096, 00:19:33.067 "send_buf_size": 4096, 00:19:33.067 "enable_recv_pipe": true, 00:19:33.067 "enable_quickack": false, 00:19:33.067 "enable_placement_id": 0, 00:19:33.067 "enable_zerocopy_send_server": true, 00:19:33.067 "enable_zerocopy_send_client": false, 00:19:33.067 "zerocopy_threshold": 0, 00:19:33.067 "tls_version": 0, 00:19:33.068 "enable_ktls": false 00:19:33.068 } 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "method": "sock_impl_set_options", 00:19:33.068 "params": { 00:19:33.068 "impl_name": "posix", 00:19:33.068 "recv_buf_size": 2097152, 00:19:33.068 "send_buf_size": 2097152, 00:19:33.068 "enable_recv_pipe": true, 00:19:33.068 "enable_quickack": false, 00:19:33.068 "enable_placement_id": 0, 00:19:33.068 "enable_zerocopy_send_server": true, 00:19:33.068 "enable_zerocopy_send_client": false, 00:19:33.068 "zerocopy_threshold": 0, 00:19:33.068 "tls_version": 0, 00:19:33.068 "enable_ktls": false 00:19:33.068 } 00:19:33.068 } 00:19:33.068 ] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "vmd", 00:19:33.068 "config": [] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "accel", 00:19:33.068 "config": [ 00:19:33.068 { 00:19:33.068 "method": "accel_set_options", 00:19:33.068 "params": { 00:19:33.068 "small_cache_size": 128, 00:19:33.068 "large_cache_size": 16, 00:19:33.068 "task_count": 2048, 00:19:33.068 "sequence_count": 2048, 00:19:33.068 "buf_count": 2048 00:19:33.068 } 00:19:33.068 } 00:19:33.068 ] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "bdev", 00:19:33.068 "config": [ 00:19:33.068 { 00:19:33.068 "method": "bdev_set_options", 00:19:33.068 "params": { 00:19:33.068 "bdev_io_pool_size": 65535, 00:19:33.068 "bdev_io_cache_size": 256, 00:19:33.068 "bdev_auto_examine": true, 00:19:33.068 "iobuf_small_cache_size": 128, 00:19:33.068 "iobuf_large_cache_size": 16 00:19:33.068 } 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "method": "bdev_raid_set_options", 00:19:33.068 "params": { 00:19:33.068 "process_window_size_kb": 1024, 00:19:33.068 "process_max_bandwidth_mb_sec": 0 00:19:33.068 } 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "method": "bdev_iscsi_set_options", 00:19:33.068 "params": { 00:19:33.068 "timeout_sec": 30 00:19:33.068 } 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "method": "bdev_nvme_set_options", 00:19:33.068 "params": { 00:19:33.068 "action_on_timeout": "none", 
00:19:33.068 "timeout_us": 0, 00:19:33.068 "timeout_admin_us": 0, 00:19:33.068 "keep_alive_timeout_ms": 10000, 00:19:33.068 "arbitration_burst": 0, 00:19:33.068 "low_priority_weight": 0, 00:19:33.068 "medium_priority_weight": 0, 00:19:33.068 "high_priority_weight": 0, 00:19:33.068 "nvme_adminq_poll_period_us": 10000, 00:19:33.068 "nvme_ioq_poll_period_us": 0, 00:19:33.068 "io_queue_requests": 0, 00:19:33.068 "delay_cmd_submit": true, 00:19:33.068 "transport_retry_count": 4, 00:19:33.068 "bdev_retry_count": 3, 00:19:33.068 "transport_ack_timeout": 0, 00:19:33.068 "ctrlr_loss_timeout_sec": 0, 00:19:33.068 "reconnect_delay_sec": 0, 00:19:33.068 "fast_io_fail_timeout_sec": 0, 00:19:33.068 "disable_auto_failback": false, 00:19:33.068 "generate_uuids": false, 00:19:33.068 "transport_tos": 0, 00:19:33.068 "nvme_error_stat": false, 00:19:33.068 "rdma_srq_size": 0, 00:19:33.068 "io_path_stat": false, 00:19:33.068 "allow_accel_sequence": false, 00:19:33.068 "rdma_max_cq_size": 0, 00:19:33.068 "rdma_cm_event_timeout_ms": 0, 00:19:33.068 "dhchap_digests": [ 00:19:33.068 "sha256", 00:19:33.068 "sha384", 00:19:33.068 "sha512" 00:19:33.068 ], 00:19:33.068 "dhchap_dhgroups": [ 00:19:33.068 "null", 00:19:33.068 "ffdhe2048", 00:19:33.068 "ffdhe3072", 00:19:33.068 "ffdhe4096", 00:19:33.068 "ffdhe6144", 00:19:33.068 "ffdhe8192" 00:19:33.068 ] 00:19:33.068 } 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "method": "bdev_nvme_set_hotplug", 00:19:33.068 "params": { 00:19:33.068 "period_us": 100000, 00:19:33.068 "enable": false 00:19:33.068 } 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "method": "bdev_malloc_create", 00:19:33.068 "params": { 00:19:33.068 "name": "malloc0", 00:19:33.068 "num_blocks": 8192, 00:19:33.068 "block_size": 4096, 00:19:33.068 "physical_block_size": 4096, 00:19:33.068 "uuid": "dddf3efd-446d-4a98-a035-c8001155218e", 00:19:33.068 "optimal_io_boundary": 0, 00:19:33.068 "md_size": 0, 00:19:33.068 "dif_type": 0, 00:19:33.068 "dif_is_head_of_md": false, 00:19:33.068 "dif_pi_format": 0 00:19:33.068 } 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "method": "bdev_wait_for_examine" 00:19:33.068 } 00:19:33.068 ] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "scsi", 00:19:33.068 "config": null 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "scheduler", 00:19:33.068 "config": [ 00:19:33.068 { 00:19:33.068 "method": "framework_set_scheduler", 00:19:33.068 "params": { 00:19:33.068 "name": "static" 00:19:33.068 } 00:19:33.068 } 00:19:33.068 ] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "vhost_scsi", 00:19:33.068 "config": [] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "vhost_blk", 00:19:33.068 "config": [] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "ublk", 00:19:33.068 "config": [ 00:19:33.068 { 00:19:33.068 "method": "ublk_create_target", 00:19:33.068 "params": { 00:19:33.068 "cpumask": "1" 00:19:33.068 } 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "method": "ublk_start_disk", 00:19:33.068 "params": { 00:19:33.068 "bdev_name": "malloc0", 00:19:33.068 "ublk_id": 0, 00:19:33.068 "num_queues": 1, 00:19:33.068 "queue_depth": 128 00:19:33.068 } 00:19:33.068 } 00:19:33.068 ] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "nbd", 00:19:33.068 "config": [] 00:19:33.068 }, 00:19:33.068 { 00:19:33.068 "subsystem": "nvmf", 00:19:33.068 "config": [ 00:19:33.068 { 00:19:33.068 "method": "nvmf_set_config", 00:19:33.068 "params": { 00:19:33.068 "discovery_filter": "match_any", 00:19:33.068 "admin_cmd_passthru": { 00:19:33.068 "identify_ctrlr": false 
00:19:33.068 }, 00:19:33.069 "dhchap_digests": [ 00:19:33.069 "sha256", 00:19:33.069 "sha384", 00:19:33.069 "sha512" 00:19:33.069 ], 00:19:33.069 "dhchap_dhgroups": [ 00:19:33.069 "null", 00:19:33.069 "ffdhe2048", 00:19:33.069 "ffdhe3072", 00:19:33.069 "ffdhe4096", 00:19:33.069 "ffdhe6144", 00:19:33.069 "ffdhe8192" 00:19:33.069 ] 00:19:33.069 } 00:19:33.069 }, 00:19:33.069 { 00:19:33.069 "method": "nvmf_set_max_subsystems", 00:19:33.069 "params": { 00:19:33.069 "max_subsystems": 1024 00:19:33.069 } 00:19:33.069 }, 00:19:33.069 { 00:19:33.069 "method": "nvmf_set_crdt", 00:19:33.069 "params": { 00:19:33.069 "crdt1": 0, 00:19:33.069 "crdt2": 0, 00:19:33.069 "crdt3": 0 00:19:33.069 } 00:19:33.069 } 00:19:33.069 ] 00:19:33.069 }, 00:19:33.069 { 00:19:33.069 "subsystem": "iscsi", 00:19:33.069 "config": [ 00:19:33.069 { 00:19:33.069 "method": "iscsi_set_options", 00:19:33.069 "params": { 00:19:33.069 "node_base": "iqn.2016-06.io.spdk", 00:19:33.069 "max_sessions": 128, 00:19:33.069 "max_connections_per_session": 2, 00:19:33.069 "max_queue_depth": 64, 00:19:33.069 "default_time2wait": 2, 00:19:33.069 "default_time2retain": 20, 00:19:33.069 "first_burst_length": 8192, 00:19:33.069 "immediate_data": true, 00:19:33.069 "allow_duplicated_isid": false, 00:19:33.069 "error_recovery_level": 0, 00:19:33.069 "nop_timeout": 60, 00:19:33.069 "nop_in_interval": 30, 00:19:33.069 "disable_chap": false, 00:19:33.069 "require_chap": false, 00:19:33.069 "mutual_chap": false, 00:19:33.069 "chap_group": 0, 00:19:33.069 "max_large_datain_per_connection": 64, 00:19:33.069 "max_r2t_per_connection": 4, 00:19:33.069 "pdu_pool_size": 36864, 00:19:33.069 "immediate_data_pool_size": 16384, 00:19:33.069 "data_out_pool_size": 2048 00:19:33.069 } 00:19:33.069 } 00:19:33.069 ] 00:19:33.069 } 00:19:33.069 ] 00:19:33.069 }' 00:19:33.069 08:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75104 00:19:33.069 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75104 ']' 00:19:33.069 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75104 00:19:33.069 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:33.069 08:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.069 08:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75104 00:19:33.069 08:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.069 08:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.069 killing process with pid 75104 00:19:33.069 08:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75104' 00:19:33.069 08:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75104 00:19:33.069 08:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75104 00:19:34.556 [2024-11-22 08:40:09.453412] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:34.556 [2024-11-22 08:40:09.489053] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:34.556 [2024-11-22 08:40:09.489191] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:34.556 [2024-11-22 08:40:09.490183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:34.556 [2024-11-22 
08:40:09.490233] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:34.556 [2024-11-22 08:40:09.490248] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:34.556 [2024-11-22 08:40:09.490273] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:34.556 [2024-11-22 08:40:09.490409] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75170 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75170 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75170 ']' 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:36.467 08:40:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:19:36.467 "subsystems": [ 00:19:36.467 { 00:19:36.467 "subsystem": "fsdev", 00:19:36.467 "config": [ 00:19:36.467 { 00:19:36.467 "method": "fsdev_set_opts", 00:19:36.467 "params": { 00:19:36.467 "fsdev_io_pool_size": 65535, 00:19:36.467 "fsdev_io_cache_size": 256 00:19:36.467 } 00:19:36.467 } 00:19:36.467 ] 00:19:36.467 }, 00:19:36.467 { 00:19:36.467 "subsystem": "keyring", 00:19:36.467 "config": [] 00:19:36.467 }, 00:19:36.467 { 00:19:36.467 "subsystem": "iobuf", 00:19:36.467 "config": [ 00:19:36.467 { 00:19:36.467 "method": "iobuf_set_options", 00:19:36.467 "params": { 00:19:36.467 "small_pool_count": 8192, 00:19:36.467 "large_pool_count": 1024, 00:19:36.467 "small_bufsize": 8192, 00:19:36.467 "large_bufsize": 135168, 00:19:36.467 "enable_numa": false 00:19:36.467 } 00:19:36.467 } 00:19:36.467 ] 00:19:36.467 }, 00:19:36.467 { 00:19:36.467 "subsystem": "sock", 00:19:36.467 "config": [ 00:19:36.467 { 00:19:36.467 "method": "sock_set_default_impl", 00:19:36.467 "params": { 00:19:36.467 "impl_name": "posix" 00:19:36.467 } 00:19:36.467 }, 00:19:36.467 { 00:19:36.467 "method": "sock_impl_set_options", 00:19:36.467 "params": { 00:19:36.467 "impl_name": "ssl", 00:19:36.467 "recv_buf_size": 4096, 00:19:36.467 "send_buf_size": 4096, 00:19:36.467 "enable_recv_pipe": true, 00:19:36.467 "enable_quickack": false, 00:19:36.467 "enable_placement_id": 0, 00:19:36.467 "enable_zerocopy_send_server": true, 00:19:36.467 "enable_zerocopy_send_client": false, 00:19:36.467 "zerocopy_threshold": 0, 00:19:36.467 "tls_version": 0, 00:19:36.467 "enable_ktls": false 00:19:36.467 } 00:19:36.467 }, 00:19:36.467 { 00:19:36.467 "method": "sock_impl_set_options", 00:19:36.467 "params": { 00:19:36.467 "impl_name": "posix", 00:19:36.467 "recv_buf_size": 2097152, 00:19:36.467 "send_buf_size": 2097152, 00:19:36.467 "enable_recv_pipe": true, 00:19:36.467 "enable_quickack": false, 00:19:36.467 "enable_placement_id": 0, 00:19:36.467 "enable_zerocopy_send_server": true, 
00:19:36.467 "enable_zerocopy_send_client": false, 00:19:36.467 "zerocopy_threshold": 0, 00:19:36.467 "tls_version": 0, 00:19:36.467 "enable_ktls": false 00:19:36.467 } 00:19:36.467 } 00:19:36.467 ] 00:19:36.467 }, 00:19:36.467 { 00:19:36.467 "subsystem": "vmd", 00:19:36.467 "config": [] 00:19:36.467 }, 00:19:36.467 { 00:19:36.467 "subsystem": "accel", 00:19:36.467 "config": [ 00:19:36.467 { 00:19:36.467 "method": "accel_set_options", 00:19:36.467 "params": { 00:19:36.467 "small_cache_size": 128, 00:19:36.467 "large_cache_size": 16, 00:19:36.467 "task_count": 2048, 00:19:36.467 "sequence_count": 2048, 00:19:36.467 "buf_count": 2048 00:19:36.467 } 00:19:36.467 } 00:19:36.467 ] 00:19:36.467 }, 00:19:36.467 { 00:19:36.467 "subsystem": "bdev", 00:19:36.467 "config": [ 00:19:36.467 { 00:19:36.467 "method": "bdev_set_options", 00:19:36.467 "params": { 00:19:36.467 "bdev_io_pool_size": 65535, 00:19:36.467 "bdev_io_cache_size": 256, 00:19:36.467 "bdev_auto_examine": true, 00:19:36.467 "iobuf_small_cache_size": 128, 00:19:36.467 "iobuf_large_cache_size": 16 00:19:36.467 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "bdev_raid_set_options", 00:19:36.468 "params": { 00:19:36.468 "process_window_size_kb": 1024, 00:19:36.468 "process_max_bandwidth_mb_sec": 0 00:19:36.468 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "bdev_iscsi_set_options", 00:19:36.468 "params": { 00:19:36.468 "timeout_sec": 30 00:19:36.468 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "bdev_nvme_set_options", 00:19:36.468 "params": { 00:19:36.468 "action_on_timeout": "none", 00:19:36.468 "timeout_us": 0, 00:19:36.468 "timeout_admin_us": 0, 00:19:36.468 "keep_alive_timeout_ms": 10000, 00:19:36.468 "arbitration_burst": 0, 00:19:36.468 "low_priority_weight": 0, 00:19:36.468 "medium_priority_weight": 0, 00:19:36.468 "high_priority_weight": 0, 00:19:36.468 "nvme_adminq_poll_period_us": 10000, 00:19:36.468 "nvme_ioq_poll_period_us": 0, 00:19:36.468 "io_queue_requests": 0, 00:19:36.468 "delay_cmd_submit": true, 00:19:36.468 "transport_retry_count": 4, 00:19:36.468 "bdev_retry_count": 3, 00:19:36.468 "transport_ack_timeout": 0, 00:19:36.468 "ctrlr_loss_timeout_sec": 0, 00:19:36.468 "reconnect_delay_sec": 0, 00:19:36.468 "fast_io_fail_timeout_sec": 0, 00:19:36.468 "disable_auto_failback": false, 00:19:36.468 "generate_uuids": false, 00:19:36.468 "transport_tos": 0, 00:19:36.468 "nvme_error_stat": false, 00:19:36.468 "rdma_srq_size": 0, 00:19:36.468 "io_path_stat": false, 00:19:36.468 "allow_accel_sequence": false, 00:19:36.468 "rdma_max_cq_size": 0, 00:19:36.468 "rdma_cm_event_timeout_ms": 0, 00:19:36.468 "dhchap_digests": [ 00:19:36.468 "sha256", 00:19:36.468 "sha384", 00:19:36.468 "sha512" 00:19:36.468 ], 00:19:36.468 "dhchap_dhgroups": [ 00:19:36.468 "null", 00:19:36.468 "ffdhe2048", 00:19:36.468 "ffdhe3072", 00:19:36.468 "ffdhe4096", 00:19:36.468 "ffdhe6144", 00:19:36.468 "ffdhe8192" 00:19:36.468 ] 00:19:36.468 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "bdev_nvme_set_hotplug", 00:19:36.468 "params": { 00:19:36.468 "period_us": 100000, 00:19:36.468 "enable": false 00:19:36.468 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "bdev_malloc_create", 00:19:36.468 "params": { 00:19:36.468 "name": "malloc0", 00:19:36.468 "num_blocks": 8192, 00:19:36.468 "block_size": 4096, 00:19:36.468 "physical_block_size": 4096, 00:19:36.468 "uuid": "dddf3efd-446d-4a98-a035-c8001155218e", 00:19:36.468 "optimal_io_boundary": 0, 00:19:36.468 "md_size": 0, 00:19:36.468 "dif_type": 0, 00:19:36.468 
"dif_is_head_of_md": false, 00:19:36.468 "dif_pi_format": 0 00:19:36.468 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "bdev_wait_for_examine" 00:19:36.468 } 00:19:36.468 ] 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "subsystem": "scsi", 00:19:36.468 "config": null 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "subsystem": "scheduler", 00:19:36.468 "config": [ 00:19:36.468 { 00:19:36.468 "method": "framework_set_scheduler", 00:19:36.468 "params": { 00:19:36.468 "name": "static" 00:19:36.468 } 00:19:36.468 } 00:19:36.468 ] 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "subsystem": "vhost_scsi", 00:19:36.468 "config": [] 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "subsystem": "vhost_blk", 00:19:36.468 "config": [] 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "subsystem": "ublk", 00:19:36.468 "config": [ 00:19:36.468 { 00:19:36.468 "method": "ublk_create_target", 00:19:36.468 "params": { 00:19:36.468 "cpumask": "1" 00:19:36.468 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "ublk_start_disk", 00:19:36.468 "params": { 00:19:36.468 "bdev_name": "malloc0", 00:19:36.468 "ublk_id": 0, 00:19:36.468 "num_queues": 1, 00:19:36.468 "queue_depth": 128 00:19:36.468 } 00:19:36.468 } 00:19:36.468 ] 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "subsystem": "nbd", 00:19:36.468 "config": [] 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "subsystem": "nvmf", 00:19:36.468 "config": [ 00:19:36.468 { 00:19:36.468 "method": "nvmf_set_config", 00:19:36.468 "params": { 00:19:36.468 "discovery_filter": "match_any", 00:19:36.468 "admin_cmd_passthru": { 00:19:36.468 "identify_ctrlr": false 00:19:36.468 }, 00:19:36.468 "dhchap_digests": [ 00:19:36.468 "sha256", 00:19:36.468 "sha384", 00:19:36.468 "sha512" 00:19:36.468 ], 00:19:36.468 "dhchap_dhgroups": [ 00:19:36.468 "null", 00:19:36.468 "ffdhe2048", 00:19:36.468 "ffdhe3072", 00:19:36.468 "ffdhe4096", 00:19:36.468 "ffdhe6144", 00:19:36.468 "ffdhe8192" 00:19:36.468 ] 00:19:36.468 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "nvmf_set_max_subsystems", 00:19:36.468 "params": { 00:19:36.468 "max_subsystems": 1024 00:19:36.468 } 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "method": "nvmf_set_crdt", 00:19:36.468 "params": { 00:19:36.468 "crdt1": 0, 00:19:36.468 "crdt2": 0, 00:19:36.468 "crdt3": 0 00:19:36.468 } 00:19:36.468 } 00:19:36.468 ] 00:19:36.468 }, 00:19:36.468 { 00:19:36.468 "subsystem": "iscsi", 00:19:36.468 "config": [ 00:19:36.468 { 00:19:36.468 "method": "iscsi_set_options", 00:19:36.468 "params": { 00:19:36.468 "node_base": "iqn.2016-06.io.spdk", 00:19:36.468 "max_sessions": 128, 00:19:36.468 "max_connections_per_session": 2, 00:19:36.468 "max_queue_depth": 64, 00:19:36.468 "default_time2wait": 2, 00:19:36.468 "default_time2retain": 20, 00:19:36.468 "first_burst_length": 8192, 00:19:36.468 "immediate_data": true, 00:19:36.468 "allow_duplicated_isid": false, 00:19:36.468 "error_recovery_level": 0, 00:19:36.468 "nop_timeout": 60, 00:19:36.468 "nop_in_interval": 30, 00:19:36.468 "disable_chap": false, 00:19:36.468 "require_chap": false, 00:19:36.468 "mutual_chap": false, 00:19:36.468 "chap_group": 0, 00:19:36.468 "max_large_datain_per_connection": 64, 00:19:36.468 "max_r2t_per_connection": 4, 00:19:36.468 "pdu_pool_size": 36864, 00:19:36.468 "immediate_data_pool_size": 16384, 00:19:36.468 "data_out_pool_size": 2048 00:19:36.468 } 00:19:36.468 } 00:19:36.468 ] 00:19:36.468 } 00:19:36.468 ] 00:19:36.468 }' 00:19:36.468 [2024-11-22 08:40:11.436741] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:19:36.468 [2024-11-22 08:40:11.436909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75170 ] 00:19:36.728 [2024-11-22 08:40:11.620936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.728 [2024-11-22 08:40:11.734295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.667 [2024-11-22 08:40:12.720972] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:37.667 [2024-11-22 08:40:12.722206] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:37.667 [2024-11-22 08:40:12.729105] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:37.667 [2024-11-22 08:40:12.729182] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:37.667 [2024-11-22 08:40:12.729194] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:37.667 [2024-11-22 08:40:12.729203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:37.667 [2024-11-22 08:40:12.738108] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:37.667 [2024-11-22 08:40:12.738130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:37.667 [2024-11-22 08:40:12.744987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:37.667 [2024-11-22 08:40:12.745082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:37.927 [2024-11-22 08:40:12.762001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75170 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75170 ']' 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75170 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75170 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.927 killing process with pid 75170 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75170' 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75170 00:19:37.927 08:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75170 00:19:39.309 [2024-11-22 08:40:14.387081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:39.570 [2024-11-22 08:40:14.418040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:39.570 [2024-11-22 08:40:14.418156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:39.570 [2024-11-22 08:40:14.424985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:39.570 [2024-11-22 08:40:14.425032] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:39.570 [2024-11-22 08:40:14.425041] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:39.570 [2024-11-22 08:40:14.425067] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:39.570 [2024-11-22 08:40:14.425212] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:41.478 08:40:16 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:19:41.478 00:19:41.478 real 0m9.957s 00:19:41.478 user 0m7.559s 00:19:41.478 sys 0m3.107s 00:19:41.478 08:40:16 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.478 08:40:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:41.478 ************************************ 00:19:41.478 END TEST test_save_ublk_config 00:19:41.478 ************************************ 00:19:41.478 08:40:16 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75262 00:19:41.478 08:40:16 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:41.478 08:40:16 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.478 08:40:16 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75262 00:19:41.478 08:40:16 ublk -- common/autotest_common.sh@835 -- # '[' -z 75262 ']' 00:19:41.478 08:40:16 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.478 08:40:16 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.478 08:40:16 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.478 08:40:16 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.478 08:40:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:41.478 [2024-11-22 08:40:16.368407] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
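That completes the save-config round trip: the target printed its live JSON state (the large blob above), the harness verified /dev/ublkb0 existed, and killprocess sent SIGTERM so the STOP_DEV/DEL_DEV teardown in the last records could finish. Outside the harness, the same snapshot-and-replay could be done roughly as follows (a sketch assuming the stock SPDK helpers; the file path is hypothetical):

  scripts/rpc.py save_config > /tmp/ublk_config.json   # dump live target state as JSON shaped like the blob above
  build/bin/spdk_tgt -c /tmp/ublk_config.json          # boot a fresh target straight from that snapshot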
00:19:41.478 [2024-11-22 08:40:16.368529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75262 ] 00:19:41.478 [2024-11-22 08:40:16.550599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:41.737 [2024-11-22 08:40:16.656295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.737 [2024-11-22 08:40:16.656330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.676 08:40:17 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.676 08:40:17 ublk -- common/autotest_common.sh@868 -- # return 0 00:19:42.676 08:40:17 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:19:42.676 08:40:17 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.676 08:40:17 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.676 08:40:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:42.676 ************************************ 00:19:42.676 START TEST test_create_ublk 00:19:42.676 ************************************ 00:19:42.676 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:19:42.676 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:19:42.676 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.676 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:42.676 [2024-11-22 08:40:17.528978] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:42.676 [2024-11-22 08:40:17.531330] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:42.676 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.676 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:19:42.676 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:19:42.676 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.676 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:42.935 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:42.935 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.935 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:42.935 [2024-11-22 08:40:17.807132] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:42.935 [2024-11-22 08:40:17.807560] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:42.935 [2024-11-22 08:40:17.807581] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:42.935 [2024-11-22 08:40:17.807590] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:42.935 [2024-11-22 08:40:17.815008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:42.935 [2024-11-22 08:40:17.815033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:42.935 
[2024-11-22 08:40:17.822995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:42.935 [2024-11-22 08:40:17.833033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:42.935 [2024-11-22 08:40:17.845995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:42.935 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:19:42.935 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.935 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:42.935 08:40:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:19:42.935 { 00:19:42.935 "ublk_device": "/dev/ublkb0", 00:19:42.935 "id": 0, 00:19:42.935 "queue_depth": 512, 00:19:42.935 "num_queues": 4, 00:19:42.935 "bdev_name": "Malloc0" 00:19:42.935 } 00:19:42.935 ]' 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:19:42.935 08:40:17 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:19:43.195 08:40:18 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:19:43.195 08:40:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:19:43.195 08:40:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:19:43.195 08:40:18 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:19:43.195 08:40:18 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:43.195 08:40:18 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
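Condensed, the create path the test just walked is the following rpc.py session (same parameters as the rpc_cmd calls in the trace, issued against the default /var/tmp/spdk.sock):

  scripts/rpc.py ublk_create_target                       # bring up the kernel-facing ublk target
  scripts/rpc.py bdev_malloc_create 128 4096              # 128 MiB ram bdev, auto-named Malloc0
  scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # expose it as /dev/ublkb0
  scripts/rpc.py ublk_get_disks -n 0                      # returns the JSON record the jq checks pick apart

The fio command assembled in the last record then drives a ten-second 0xcc pattern write through /dev/ublkb0, as the next block of output shows.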
00:19:43.195 08:40:18 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:19:43.195 fio: verification read phase will never start because write phase uses all of runtime 00:19:43.195 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:19:43.195 fio-3.35 00:19:43.195 Starting 1 process 00:19:55.427 00:19:55.427 fio_test: (groupid=0, jobs=1): err= 0: pid=75310: Fri Nov 22 08:40:28 2024 00:19:55.427 write: IOPS=16.2k, BW=63.2MiB/s (66.2MB/s)(632MiB/10003msec); 0 zone resets 00:19:55.427 clat (usec): min=37, max=4027, avg=61.04, stdev=99.98 00:19:55.427 lat (usec): min=37, max=4028, avg=61.50, stdev=99.99 00:19:55.427 clat percentiles (usec): 00:19:55.427 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:19:55.427 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:19:55.427 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 63], 95.00th=[ 66], 00:19:55.427 | 99.00th=[ 75], 99.50th=[ 82], 99.90th=[ 2114], 99.95th=[ 2835], 00:19:55.427 | 99.99th=[ 3556] 00:19:55.427 bw ( KiB/s): min=63456, max=65480, per=100.00%, avg=64700.21, stdev=526.51, samples=19 00:19:55.427 iops : min=15864, max=16370, avg=16175.16, stdev=131.59, samples=19 00:19:55.427 lat (usec) : 50=2.66%, 100=97.07%, 250=0.07%, 500=0.01%, 750=0.01% 00:19:55.427 lat (usec) : 1000=0.02% 00:19:55.427 lat (msec) : 2=0.07%, 4=0.11%, 10=0.01% 00:19:55.427 cpu : usr=3.14%, sys=10.19%, ctx=161775, majf=0, minf=797 00:19:55.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.427 issued rwts: total=0,161764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.427 00:19:55.427 Run status group 0 (all jobs): 00:19:55.427 WRITE: bw=63.2MiB/s (66.2MB/s), 63.2MiB/s-63.2MiB/s (66.2MB/s-66.2MB/s), io=632MiB (663MB), run=10003-10003msec 00:19:55.427 00:19:55.427 Disk stats (read/write): 00:19:55.427 ublkb0: ios=0/160068, merge=0/0, ticks=0/8609, in_queue=8610, util=98.90% 00:19:55.427 08:40:28 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 [2024-11-22 08:40:28.358822] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:55.427 [2024-11-22 08:40:28.400007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:55.427 [2024-11-22 08:40:28.400748] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:55.427 [2024-11-22 08:40:28.411011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:55.427 [2024-11-22 08:40:28.411265] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:55.427 [2024-11-22 08:40:28.411279] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.427 08:40:28 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 [2024-11-22 08:40:28.434046] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:55.427 request: 00:19:55.427 { 00:19:55.427 "ublk_id": 0, 00:19:55.427 "method": "ublk_stop_disk", 00:19:55.427 "req_id": 1 00:19:55.427 } 00:19:55.427 Got JSON-RPC error response 00:19:55.427 response: 00:19:55.427 { 00:19:55.427 "code": -19, 00:19:55.427 "message": "No such device" 00:19:55.427 } 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:55.427 08:40:28 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 [2024-11-22 08:40:28.457070] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:55.427 [2024-11-22 08:40:28.467978] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:55.427 [2024-11-22 08:40:28.468022] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.427 08:40:28 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.427 08:40:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.427 08:40:29 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:55.427 08:40:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.427 08:40:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:55.427 08:40:29 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:55.427 08:40:29 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:55.427 08:40:29 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.427 08:40:29 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:55.427 08:40:29 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:19:55.427 08:40:29 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:55.427 00:19:55.427 real 0m11.760s 00:19:55.427 user 0m0.709s 00:19:55.427 sys 0m1.155s 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.427 08:40:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 ************************************ 00:19:55.427 END TEST test_create_ublk 00:19:55.427 ************************************ 00:19:55.427 08:40:29 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:19:55.427 08:40:29 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:55.427 08:40:29 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.427 08:40:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 ************************************ 00:19:55.427 START TEST test_create_multi_ublk 00:19:55.427 ************************************ 00:19:55.427 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:19:55.427 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:19:55.427 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.427 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.427 [2024-11-22 08:40:29.365972] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:55.427 [2024-11-22 08:40:29.368349] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:55.427 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.427 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.428 [2024-11-22 08:40:29.643112] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:19:55.428 [2024-11-22 08:40:29.643580] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:55.428 [2024-11-22 08:40:29.643599] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:55.428 [2024-11-22 08:40:29.643621] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:55.428 [2024-11-22 08:40:29.650999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:55.428 [2024-11-22 08:40:29.651027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:55.428 [2024-11-22 08:40:29.659002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:55.428 [2024-11-22 08:40:29.659596] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:55.428 [2024-11-22 08:40:29.669020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.428 [2024-11-22 08:40:29.953112] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:19:55.428 [2024-11-22 08:40:29.953573] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:19:55.428 [2024-11-22 08:40:29.953593] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:55.428 [2024-11-22 08:40:29.953601] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:55.428 [2024-11-22 08:40:29.959990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:55.428 [2024-11-22 08:40:29.960014] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:55.428 [2024-11-22 08:40:29.967999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:55.428 [2024-11-22 08:40:29.968604] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:55.428 [2024-11-22 08:40:29.991995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:55.428 08:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:55.428 08:40:30 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.428 [2024-11-22 08:40:30.288106] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:19:55.428 [2024-11-22 08:40:30.288559] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:19:55.428 [2024-11-22 08:40:30.288581] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:19:55.428 [2024-11-22 08:40:30.288592] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:19:55.428 [2024-11-22 08:40:30.296274] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:55.428 [2024-11-22 08:40:30.296304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:55.428 [2024-11-22 08:40:30.303979] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:55.428 [2024-11-22 08:40:30.304544] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:19:55.428 [2024-11-22 08:40:30.313022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.428 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.688 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.688 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:55.688 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:55.688 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.688 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.688 [2024-11-22 08:40:30.606124] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:55.688 [2024-11-22 08:40:30.606599] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:55.688 [2024-11-22 08:40:30.606623] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:55.688 [2024-11-22 08:40:30.606631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:55.688 [2024-11-22 
08:40:30.614017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:55.688 [2024-11-22 08:40:30.614039] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:55.688 [2024-11-22 08:40:30.622000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:55.688 [2024-11-22 08:40:30.622610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:55.688 [2024-11-22 08:40:30.625736] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:55.688 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.688 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:55.689 { 00:19:55.689 "ublk_device": "/dev/ublkb0", 00:19:55.689 "id": 0, 00:19:55.689 "queue_depth": 512, 00:19:55.689 "num_queues": 4, 00:19:55.689 "bdev_name": "Malloc0" 00:19:55.689 }, 00:19:55.689 { 00:19:55.689 "ublk_device": "/dev/ublkb1", 00:19:55.689 "id": 1, 00:19:55.689 "queue_depth": 512, 00:19:55.689 "num_queues": 4, 00:19:55.689 "bdev_name": "Malloc1" 00:19:55.689 }, 00:19:55.689 { 00:19:55.689 "ublk_device": "/dev/ublkb2", 00:19:55.689 "id": 2, 00:19:55.689 "queue_depth": 512, 00:19:55.689 "num_queues": 4, 00:19:55.689 "bdev_name": "Malloc2" 00:19:55.689 }, 00:19:55.689 { 00:19:55.689 "ublk_device": "/dev/ublkb3", 00:19:55.689 "id": 3, 00:19:55.689 "queue_depth": 512, 00:19:55.689 "num_queues": 4, 00:19:55.689 "bdev_name": "Malloc3" 00:19:55.689 } 00:19:55.689 ]' 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:55.689 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
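The run of jq probes here pulls individual fields out of that four-entry ublk_get_disks array, one disk per iteration of the seq 0 3 loop. A single equivalent check by hand would look roughly like:

  scripts/rpc.py ublk_get_disks | jq -r '.[3].bdev_name'   # expected to print Malloc3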
00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:55.948 08:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:55.948 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:55.948 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:56.208 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.467 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:56.467 [2024-11-22 08:40:31.525091] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:56.726 [2024-11-22 08:40:31.571005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:56.726 [2024-11-22 08:40:31.571895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:56.726 [2024-11-22 08:40:31.579097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:56.726 [2024-11-22 08:40:31.579379] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:56.727 [2024-11-22 08:40:31.579393] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:56.727 [2024-11-22 08:40:31.595075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:56.727 [2024-11-22 08:40:31.623372] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:56.727 [2024-11-22 08:40:31.624447] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:56.727 [2024-11-22 08:40:31.635003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:56.727 [2024-11-22 08:40:31.635277] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:56.727 [2024-11-22 08:40:31.635291] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:56.727 [2024-11-22 08:40:31.651105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:56.727 [2024-11-22 08:40:31.687379] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:56.727 [2024-11-22 08:40:31.688374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:56.727 [2024-11-22 08:40:31.693999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:56.727 [2024-11-22 08:40:31.694247] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:56.727 [2024-11-22 08:40:31.694264] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
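Disk 3 goes through the same stop sequence next, after which the target is torn down over RPC. Condensed, the whole teardown amounts to roughly:

  for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk $i; done
  scripts/rpc.py -t 120 ublk_destroy_target   # -t raises the RPC timeout, since destroying the target can outrun the default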
00:19:56.727 [2024-11-22 08:40:31.708093] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:56.727 [2024-11-22 08:40:31.743396] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:56.727 [2024-11-22 08:40:31.744338] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:56.727 [2024-11-22 08:40:31.749987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:56.727 [2024-11-22 08:40:31.750234] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:56.727 [2024-11-22 08:40:31.750246] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.727 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:56.986 [2024-11-22 08:40:31.958054] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:56.986 [2024-11-22 08:40:31.965979] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:56.986 [2024-11-22 08:40:31.966035] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:56.986 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:56.986 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:56.986 08:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:56.986 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.986 08:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:57.924 08:40:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.924 08:40:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:57.924 08:40:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:57.924 08:40:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.924 08:40:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:58.183 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.183 08:40:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:58.183 08:40:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:58.183 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.183 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:58.442 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.442 08:40:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:58.442 08:40:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:58.442 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.442 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:58.701 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.701 08:40:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:58.701 08:40:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:19:58.701 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.701 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:58.701 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.701 08:40:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:58.701 08:40:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:58.960 00:19:58.960 real 0m4.520s 00:19:58.960 user 0m1.021s 00:19:58.960 sys 0m0.245s 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.960 08:40:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:58.960 ************************************ 00:19:58.960 END TEST test_create_multi_ublk 00:19:58.960 ************************************ 00:19:58.960 08:40:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:58.960 08:40:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:58.960 08:40:33 ublk -- ublk/ublk.sh@130 -- # killprocess 75262 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@954 -- # '[' -z 75262 ']' 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@958 -- # kill -0 75262 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@959 -- # uname 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75262 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75262' 00:19:58.960 killing process with pid 75262 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@973 -- # kill 75262 00:19:58.960 08:40:33 ublk -- common/autotest_common.sh@978 -- # wait 75262 00:20:00.335 [2024-11-22 08:40:35.064655] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:00.335 [2024-11-22 08:40:35.064708] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:01.272 00:20:01.272 real 0m30.247s 00:20:01.272 user 0m43.440s 00:20:01.272 sys 0m10.257s 00:20:01.272 08:40:36 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.272 08:40:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:01.272 ************************************ 00:20:01.272 END TEST ublk 00:20:01.272 ************************************ 00:20:01.272 08:40:36 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:01.272 08:40:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:20:01.272 08:40:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.272 08:40:36 -- common/autotest_common.sh@10 -- # set +x 00:20:01.272 ************************************ 00:20:01.272 START TEST ublk_recovery 00:20:01.272 ************************************ 00:20:01.272 08:40:36 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:01.531 * Looking for test storage... 00:20:01.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:01.531 08:40:36 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:01.531 08:40:36 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:20:01.531 08:40:36 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:01.531 08:40:36 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:01.531 08:40:36 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.531 08:40:36 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.531 08:40:36 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.532 08:40:36 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.532 --rc genhtml_branch_coverage=1 00:20:01.532 --rc genhtml_function_coverage=1 00:20:01.532 --rc genhtml_legend=1 00:20:01.532 --rc geninfo_all_blocks=1 00:20:01.532 --rc geninfo_unexecuted_blocks=1 00:20:01.532 00:20:01.532 ' 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.532 --rc genhtml_branch_coverage=1 00:20:01.532 --rc genhtml_function_coverage=1 00:20:01.532 --rc genhtml_legend=1 00:20:01.532 --rc geninfo_all_blocks=1 00:20:01.532 --rc geninfo_unexecuted_blocks=1 00:20:01.532 00:20:01.532 ' 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.532 --rc genhtml_branch_coverage=1 00:20:01.532 --rc genhtml_function_coverage=1 00:20:01.532 --rc genhtml_legend=1 00:20:01.532 --rc geninfo_all_blocks=1 00:20:01.532 --rc geninfo_unexecuted_blocks=1 00:20:01.532 00:20:01.532 ' 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.532 --rc genhtml_branch_coverage=1 00:20:01.532 --rc genhtml_function_coverage=1 00:20:01.532 --rc genhtml_legend=1 00:20:01.532 --rc geninfo_all_blocks=1 00:20:01.532 --rc geninfo_unexecuted_blocks=1 00:20:01.532 00:20:01.532 ' 00:20:01.532 08:40:36 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:01.532 08:40:36 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:01.532 08:40:36 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:01.532 08:40:36 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:01.532 08:40:36 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:01.532 08:40:36 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:01.532 08:40:36 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:01.532 08:40:36 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:01.532 08:40:36 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:01.532 08:40:36 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:01.532 08:40:36 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75691 00:20:01.532 08:40:36 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:01.532 08:40:36 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.532 08:40:36 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75691 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75691 ']' 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.532 08:40:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.792 [2024-11-22 08:40:36.674227] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:20:01.792 [2024-11-22 08:40:36.674369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75691 ] 00:20:01.792 [2024-11-22 08:40:36.853889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:02.052 [2024-11-22 08:40:36.962786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.052 [2024-11-22 08:40:36.962820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:02.989 08:40:37 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:02.989 [2024-11-22 08:40:37.820976] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:02.989 [2024-11-22 08:40:37.823355] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.989 08:40:37 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:02.989 malloc0 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.989 08:40:37 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.989 08:40:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 [2024-11-22 08:40:37.971144] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:20:02.990 [2024-11-22 08:40:37.971258] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:02.990 [2024-11-22 08:40:37.971273] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:02.990 [2024-11-22 08:40:37.971284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:02.990 [2024-11-22 08:40:37.979044] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:02.990 [2024-11-22 08:40:37.979069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:02.990 [2024-11-22 08:40:37.986985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:02.990 [2024-11-22 08:40:37.987124] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:02.990 [2024-11-22 08:40:38.001981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:02.990 1 00:20:02.990 08:40:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.990 08:40:38 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:04.364 08:40:39 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75726 00:20:04.364 08:40:39 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:04.364 08:40:39 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:04.364 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:04.364 fio-3.35 00:20:04.364 Starting 1 process 00:20:09.633 08:40:44 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75691 00:20:09.633 08:40:44 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:14.908 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75691 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:14.908 08:40:49 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75837 00:20:14.908 08:40:49 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:14.908 08:40:49 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.908 08:40:49 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75837 00:20:14.908 08:40:49 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75837 ']' 00:20:14.908 08:40:49 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.908 08:40:49 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.908 08:40:49 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.908 08:40:49 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.908 08:40:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.908 [2024-11-22 08:40:49.131632] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
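The trace to this point is the crux of the recovery test: a ublk disk backed by a malloc bdev is exported to the kernel, fio runs against /dev/ublkb1, the original target (pid 75691) is killed with SIGKILL mid-I/O, and the replacement spdk_tgt starting up above re-attaches the surviving kernel device. A condensed sketch of that flow, using the same RPCs the log shows (angle-bracket values are placeholders):

    rpc=scripts/rpc.py
    build/bin/spdk_tgt -m 0x3 -L ublk &
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128       # exposes /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    kill -9 <old spdk_tgt pid>                       # simulate a crash mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk &              # replacement target
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_recover_disk malloc0 1                 # GET_DEV_INFO, then START/END_USER_RECOVERY
    wait <fio pid>                                   # fio must ride out the outage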
00:20:14.908 [2024-11-22 08:40:49.131771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75837 ] 00:20:14.908 [2024-11-22 08:40:49.311374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:14.908 [2024-11-22 08:40:49.417283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.908 [2024-11-22 08:40:49.417332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:15.474 08:40:50 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.474 [2024-11-22 08:40:50.264976] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:15.474 [2024-11-22 08:40:50.267722] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.474 08:40:50 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.474 malloc0 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.474 08:40:50 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.474 [2024-11-22 08:40:50.404129] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:15.474 [2024-11-22 08:40:50.404170] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:15.474 [2024-11-22 08:40:50.404183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:15.474 [2024-11-22 08:40:50.410987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:15.474 [2024-11-22 08:40:50.411014] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:20:15.474 [2024-11-22 08:40:50.411025] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:20:15.474 [2024-11-22 08:40:50.411109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:15.474 1 00:20:15.474 08:40:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.474 08:40:50 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75726 00:20:15.474 [2024-11-22 08:40:50.418981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:15.474 [2024-11-22 08:40:50.425433] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:15.474 [2024-11-22 08:40:50.433171] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:15.474 [2024-11-22 
08:40:50.433193] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:21:11.805 00:21:11.805 fio_test: (groupid=0, jobs=1): err= 0: pid=75729: Fri Nov 22 08:41:39 2024 00:21:11.805 read: IOPS=23.2k, BW=90.5MiB/s (94.9MB/s)(5432MiB/60002msec) 00:21:11.805 slat (nsec): min=1963, max=2863.4k, avg=6847.52, stdev=3483.09 00:21:11.805 clat (usec): min=940, max=6423.4k, avg=2702.67, stdev=42518.26 00:21:11.805 lat (usec): min=946, max=6423.4k, avg=2709.52, stdev=42518.27 00:21:11.805 clat percentiles (usec): 00:21:11.805 | 1.00th=[ 1909], 5.00th=[ 2089], 10.00th=[ 2147], 20.00th=[ 2180], 00:21:11.805 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2278], 60.00th=[ 2311], 00:21:11.805 | 70.00th=[ 2343], 80.00th=[ 2376], 90.00th=[ 2769], 95.00th=[ 3687], 00:21:11.805 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7046], 99.95th=[ 7832], 00:21:11.805 | 99.99th=[12649] 00:21:11.805 bw ( KiB/s): min=35128, max=108592, per=100.00%, avg=103123.65, stdev=10780.13, samples=107 00:21:11.805 iops : min= 8782, max=27148, avg=25780.87, stdev=2695.02, samples=107 00:21:11.805 write: IOPS=23.1k, BW=90.4MiB/s (94.8MB/s)(5425MiB/60002msec); 0 zone resets 00:21:11.805 slat (nsec): min=1985, max=2895.2k, avg=6863.89, stdev=3440.83 00:21:11.805 clat (usec): min=917, max=6423.5k, avg=2808.75, stdev=44588.96 00:21:11.805 lat (usec): min=924, max=6423.6k, avg=2815.61, stdev=44588.96 00:21:11.805 clat percentiles (usec): 00:21:11.805 | 1.00th=[ 1926], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2278], 00:21:11.805 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:21:11.805 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2769], 95.00th=[ 3654], 00:21:11.805 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7111], 99.95th=[ 7963], 00:21:11.805 | 99.99th=[12780] 00:21:11.805 bw ( KiB/s): min=35800, max=107984, per=100.00%, avg=102995.35, stdev=10602.25, samples=107 00:21:11.805 iops : min= 8950, max=26996, avg=25748.79, stdev=2650.55, samples=107 00:21:11.805 lat (usec) : 1000=0.01% 00:21:11.805 lat (msec) : 2=2.63%, 4=93.78%, 10=3.57%, 20=0.01%, >=2000=0.01% 00:21:11.805 cpu : usr=11.55%, sys=31.55%, ctx=118374, majf=0, minf=13 00:21:11.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:11.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:11.805 issued rwts: total=1390635,1388833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:11.805 00:21:11.805 Run status group 0 (all jobs): 00:21:11.805 READ: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=5432MiB (5696MB), run=60002-60002msec 00:21:11.805 WRITE: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=5425MiB (5689MB), run=60002-60002msec 00:21:11.805 00:21:11.805 Disk stats (read/write): 00:21:11.805 ublkb1: ios=1387989/1386129, merge=0/0, ticks=3646367/3654338, in_queue=7300705, util=99.95% 00:21:11.805 08:41:39 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.805 [2024-11-22 08:41:39.291933] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:11.805 [2024-11-22 08:41:39.330059] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:21:11.805 [2024-11-22 08:41:39.330297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:11.805 [2024-11-22 08:41:39.338011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:11.805 [2024-11-22 08:41:39.338177] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:11.805 [2024-11-22 08:41:39.338193] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.805 08:41:39 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.805 [2024-11-22 08:41:39.354115] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:11.805 [2024-11-22 08:41:39.361976] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:11.805 [2024-11-22 08:41:39.362015] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.805 08:41:39 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:11.805 08:41:39 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:11.805 08:41:39 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75837 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75837 ']' 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75837 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75837 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.805 killing process with pid 75837 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75837' 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75837 00:21:11.805 08:41:39 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75837 00:21:11.805 [2024-11-22 08:41:40.999713] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:11.805 [2024-11-22 08:41:40.999782] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:11.805 00:21:11.805 real 1m6.018s 00:21:11.805 user 1m50.476s 00:21:11.805 sys 0m36.635s 00:21:11.805 08:41:42 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.805 08:41:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.805 ************************************ 00:21:11.805 END TEST ublk_recovery 00:21:11.805 ************************************ 00:21:11.805 08:41:42 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:21:11.805 08:41:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:11.806 08:41:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.806 08:41:42 -- common/autotest_common.sh@10 -- # set +x 00:21:11.806 08:41:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:21:11.806 08:41:42 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:11.806 08:41:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:11.806 08:41:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.806 08:41:42 -- common/autotest_common.sh@10 -- # set +x 00:21:11.806 ************************************ 00:21:11.806 START TEST ftl 00:21:11.806 ************************************ 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:11.806 * Looking for test storage... 00:21:11.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:11.806 08:41:42 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.806 08:41:42 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.806 08:41:42 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.806 08:41:42 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.806 08:41:42 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.806 08:41:42 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.806 08:41:42 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.806 08:41:42 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.806 08:41:42 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.806 08:41:42 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.806 08:41:42 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.806 08:41:42 ftl -- scripts/common.sh@344 -- # case "$op" in 00:21:11.806 08:41:42 ftl -- scripts/common.sh@345 -- # : 1 00:21:11.806 08:41:42 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.806 08:41:42 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.806 08:41:42 ftl -- scripts/common.sh@365 -- # decimal 1 00:21:11.806 08:41:42 ftl -- scripts/common.sh@353 -- # local d=1 00:21:11.806 08:41:42 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.806 08:41:42 ftl -- scripts/common.sh@355 -- # echo 1 00:21:11.806 08:41:42 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.806 08:41:42 ftl -- scripts/common.sh@366 -- # decimal 2 00:21:11.806 08:41:42 ftl -- scripts/common.sh@353 -- # local d=2 00:21:11.806 08:41:42 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.806 08:41:42 ftl -- scripts/common.sh@355 -- # echo 2 00:21:11.806 08:41:42 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.806 08:41:42 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.806 08:41:42 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.806 08:41:42 ftl -- scripts/common.sh@368 -- # return 0 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:11.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.806 --rc genhtml_branch_coverage=1 00:21:11.806 --rc genhtml_function_coverage=1 00:21:11.806 --rc genhtml_legend=1 00:21:11.806 --rc geninfo_all_blocks=1 00:21:11.806 --rc geninfo_unexecuted_blocks=1 00:21:11.806 00:21:11.806 ' 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:11.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.806 --rc genhtml_branch_coverage=1 00:21:11.806 --rc genhtml_function_coverage=1 00:21:11.806 --rc genhtml_legend=1 00:21:11.806 --rc geninfo_all_blocks=1 00:21:11.806 --rc geninfo_unexecuted_blocks=1 00:21:11.806 00:21:11.806 ' 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:11.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.806 --rc genhtml_branch_coverage=1 00:21:11.806 --rc genhtml_function_coverage=1 00:21:11.806 --rc genhtml_legend=1 00:21:11.806 --rc geninfo_all_blocks=1 00:21:11.806 --rc geninfo_unexecuted_blocks=1 00:21:11.806 00:21:11.806 ' 00:21:11.806 08:41:42 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:11.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.806 --rc genhtml_branch_coverage=1 00:21:11.806 --rc genhtml_function_coverage=1 00:21:11.806 --rc genhtml_legend=1 00:21:11.806 --rc geninfo_all_blocks=1 00:21:11.806 --rc geninfo_unexecuted_blocks=1 00:21:11.806 00:21:11.806 ' 00:21:11.806 08:41:42 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:11.806 08:41:42 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:11.806 08:41:42 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:11.806 08:41:42 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:11.806 08:41:42 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
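The block a few lines up is scripts/common.sh probing the installed lcov: lt 1.15 2 walks the dotted version components and, because 1.15 sorts before 2, selects the pre-2.0 --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spelling of the coverage flags. A minimal stand-in for that comparison (a hypothetical helper, not the repo's cmp_versions, assuming GNU sort's -V version ordering is available):

    # version_lt A B: succeeds when A sorts strictly before B.
    version_lt() {
        [ "$1" != "$2" ] &&
            [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo 'pre-2.0 lcov detected'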
00:21:11.806 08:41:42 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:11.806 08:41:42 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:11.806 08:41:42 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:11.806 08:41:42 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:11.806 08:41:42 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:11.806 08:41:42 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:11.806 08:41:42 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:11.806 08:41:42 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:11.806 08:41:42 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:11.806 08:41:42 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:11.806 08:41:42 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:11.806 08:41:42 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:11.806 08:41:42 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:11.806 08:41:42 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:11.806 08:41:42 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:11.806 08:41:42 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:11.806 08:41:42 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:11.806 08:41:42 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:11.806 08:41:42 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:11.806 08:41:42 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:11.806 08:41:42 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:11.806 08:41:42 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:11.806 08:41:42 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:11.806 08:41:42 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:11.806 08:41:42 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:11.806 08:41:42 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:11.806 08:41:42 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:21:11.806 08:41:42 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:11.806 08:41:42 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:11.806 08:41:42 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:11.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.806 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:11.806 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:11.806 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:11.806 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:11.806 08:41:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:11.806 08:41:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76643 00:21:11.806 08:41:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76643 00:21:11.806 08:41:43 ftl -- common/autotest_common.sh@835 -- # '[' -z 76643 ']' 00:21:11.806 08:41:43 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.806 08:41:43 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.806 08:41:43 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.806 08:41:43 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.806 08:41:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:11.806 [2024-11-22 08:41:43.715290] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:21:11.806 [2024-11-22 08:41:43.715430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76643 ] 00:21:11.806 [2024-11-22 08:41:43.895977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.806 [2024-11-22 08:41:44.003058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.806 08:41:44 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.806 08:41:44 ftl -- common/autotest_common.sh@868 -- # return 0 00:21:11.806 08:41:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:11.807 08:41:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:11.807 08:41:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:11.807 08:41:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@50 -- # break 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@63 -- # break 00:21:11.807 08:41:46 ftl -- ftl/ftl.sh@66 -- # killprocess 76643 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@954 -- # '[' -z 76643 ']' 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@958 -- # kill -0 76643 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@959 -- # uname 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.807 08:41:46 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76643 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.807 killing process with pid 76643 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76643' 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@973 -- # kill 76643 00:21:11.807 08:41:46 ftl -- common/autotest_common.sh@978 -- # wait 76643 00:21:14.345 08:41:48 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:14.345 08:41:48 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:14.345 08:41:48 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:14.345 08:41:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.345 08:41:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:14.345 ************************************ 00:21:14.345 START TEST ftl_fio_basic 00:21:14.345 ************************************ 00:21:14.345 08:41:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:14.345 * Looking for test storage... 00:21:14.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:14.345 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.346 --rc genhtml_branch_coverage=1 00:21:14.346 --rc genhtml_function_coverage=1 00:21:14.346 --rc genhtml_legend=1 00:21:14.346 --rc geninfo_all_blocks=1 00:21:14.346 --rc geninfo_unexecuted_blocks=1 00:21:14.346 00:21:14.346 ' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.346 --rc genhtml_branch_coverage=1 00:21:14.346 --rc genhtml_function_coverage=1 00:21:14.346 --rc genhtml_legend=1 00:21:14.346 --rc geninfo_all_blocks=1 00:21:14.346 --rc geninfo_unexecuted_blocks=1 00:21:14.346 00:21:14.346 ' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.346 --rc genhtml_branch_coverage=1 00:21:14.346 --rc genhtml_function_coverage=1 00:21:14.346 --rc genhtml_legend=1 00:21:14.346 --rc geninfo_all_blocks=1 00:21:14.346 --rc geninfo_unexecuted_blocks=1 00:21:14.346 00:21:14.346 ' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.346 --rc genhtml_branch_coverage=1 00:21:14.346 --rc genhtml_function_coverage=1 00:21:14.346 --rc genhtml_legend=1 00:21:14.346 --rc geninfo_all_blocks=1 00:21:14.346 --rc geninfo_unexecuted_blocks=1 00:21:14.346 00:21:14.346 ' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76786 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76786 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76786 ']' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.346 08:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:14.346 [2024-11-22 08:41:49.306414] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
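By this point ftl.sh has picked the two PCI devices with the jq filters shown earlier (the 0000:00:10.0 namespace, selected for its 64-byte metadata, becomes the cache; 0000:00:11.0 the base device) and hands both to fio.sh along with a suite name. Roughly how fio.sh consumes its arguments, mirroring the declarations traced above:

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    device=$1 cache_device=$2 tests=${suite[$3]}     # 0000:00:11.0 0000:00:10.0 basic
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=$testdir/config/ftl.json    # $testdir = test/ftl in the repo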
00:21:14.346 [2024-11-22 08:41:49.306556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76786 ] 00:21:14.604 [2024-11-22 08:41:49.488912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:14.604 [2024-11-22 08:41:49.604999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.604 [2024-11-22 08:41:49.605102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.604 [2024-11-22 08:41:49.605136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.542 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.542 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:15.542 08:41:50 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:15.542 08:41:50 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:15.542 08:41:50 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:15.542 08:41:50 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:15.542 08:41:50 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:15.542 08:41:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:15.802 08:41:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:15.802 08:41:50 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:15.802 08:41:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:15.802 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:15.802 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:15.802 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:15.802 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:15.802 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:16.062 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:16.062 { 00:21:16.062 "name": "nvme0n1", 00:21:16.062 "aliases": [ 00:21:16.062 "a868d349-7f92-45d4-a7c6-cbe85edaf9f1" 00:21:16.062 ], 00:21:16.062 "product_name": "NVMe disk", 00:21:16.062 "block_size": 4096, 00:21:16.062 "num_blocks": 1310720, 00:21:16.062 "uuid": "a868d349-7f92-45d4-a7c6-cbe85edaf9f1", 00:21:16.062 "numa_id": -1, 00:21:16.062 "assigned_rate_limits": { 00:21:16.062 "rw_ios_per_sec": 0, 00:21:16.062 "rw_mbytes_per_sec": 0, 00:21:16.062 "r_mbytes_per_sec": 0, 00:21:16.062 "w_mbytes_per_sec": 0 00:21:16.062 }, 00:21:16.062 "claimed": false, 00:21:16.062 "zoned": false, 00:21:16.062 "supported_io_types": { 00:21:16.062 "read": true, 00:21:16.062 "write": true, 00:21:16.062 "unmap": true, 00:21:16.062 "flush": true, 00:21:16.062 "reset": true, 00:21:16.062 "nvme_admin": true, 00:21:16.062 "nvme_io": true, 00:21:16.062 "nvme_io_md": false, 00:21:16.062 "write_zeroes": true, 00:21:16.062 "zcopy": false, 00:21:16.062 "get_zone_info": false, 00:21:16.062 "zone_management": false, 00:21:16.062 "zone_append": false, 00:21:16.062 "compare": true, 00:21:16.062 "compare_and_write": false, 00:21:16.062 "abort": true, 00:21:16.062 
"seek_hole": false, 00:21:16.062 "seek_data": false, 00:21:16.062 "copy": true, 00:21:16.062 "nvme_iov_md": false 00:21:16.062 }, 00:21:16.062 "driver_specific": { 00:21:16.062 "nvme": [ 00:21:16.062 { 00:21:16.062 "pci_address": "0000:00:11.0", 00:21:16.062 "trid": { 00:21:16.062 "trtype": "PCIe", 00:21:16.062 "traddr": "0000:00:11.0" 00:21:16.062 }, 00:21:16.062 "ctrlr_data": { 00:21:16.062 "cntlid": 0, 00:21:16.062 "vendor_id": "0x1b36", 00:21:16.062 "model_number": "QEMU NVMe Ctrl", 00:21:16.062 "serial_number": "12341", 00:21:16.062 "firmware_revision": "8.0.0", 00:21:16.062 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:16.062 "oacs": { 00:21:16.062 "security": 0, 00:21:16.062 "format": 1, 00:21:16.062 "firmware": 0, 00:21:16.062 "ns_manage": 1 00:21:16.062 }, 00:21:16.062 "multi_ctrlr": false, 00:21:16.062 "ana_reporting": false 00:21:16.062 }, 00:21:16.062 "vs": { 00:21:16.062 "nvme_version": "1.4" 00:21:16.062 }, 00:21:16.062 "ns_data": { 00:21:16.062 "id": 1, 00:21:16.062 "can_share": false 00:21:16.062 } 00:21:16.062 } 00:21:16.062 ], 00:21:16.062 "mp_policy": "active_passive" 00:21:16.062 } 00:21:16.062 } 00:21:16.062 ]' 00:21:16.062 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:16.062 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:16.062 08:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:16.062 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:16.062 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:16.062 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:21:16.062 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:16.062 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:16.062 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:16.062 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:16.062 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:16.322 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:16.322 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=7f7d2af8-ffd3-44c6-9baf-2e463d41d3df 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7f7d2af8-ffd3-44c6-9baf-2e463d41d3df 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:16.582 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=851a3538-62bc-4c5e-b75f-11110e3afbc3 
00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:16.841 { 00:21:16.841 "name": "851a3538-62bc-4c5e-b75f-11110e3afbc3", 00:21:16.841 "aliases": [ 00:21:16.841 "lvs/nvme0n1p0" 00:21:16.841 ], 00:21:16.841 "product_name": "Logical Volume", 00:21:16.841 "block_size": 4096, 00:21:16.841 "num_blocks": 26476544, 00:21:16.841 "uuid": "851a3538-62bc-4c5e-b75f-11110e3afbc3", 00:21:16.841 "assigned_rate_limits": { 00:21:16.841 "rw_ios_per_sec": 0, 00:21:16.841 "rw_mbytes_per_sec": 0, 00:21:16.841 "r_mbytes_per_sec": 0, 00:21:16.841 "w_mbytes_per_sec": 0 00:21:16.841 }, 00:21:16.841 "claimed": false, 00:21:16.841 "zoned": false, 00:21:16.841 "supported_io_types": { 00:21:16.841 "read": true, 00:21:16.841 "write": true, 00:21:16.841 "unmap": true, 00:21:16.841 "flush": false, 00:21:16.841 "reset": true, 00:21:16.841 "nvme_admin": false, 00:21:16.841 "nvme_io": false, 00:21:16.841 "nvme_io_md": false, 00:21:16.841 "write_zeroes": true, 00:21:16.841 "zcopy": false, 00:21:16.841 "get_zone_info": false, 00:21:16.841 "zone_management": false, 00:21:16.841 "zone_append": false, 00:21:16.841 "compare": false, 00:21:16.841 "compare_and_write": false, 00:21:16.841 "abort": false, 00:21:16.841 "seek_hole": true, 00:21:16.841 "seek_data": true, 00:21:16.841 "copy": false, 00:21:16.841 "nvme_iov_md": false 00:21:16.841 }, 00:21:16.841 "driver_specific": { 00:21:16.841 "lvol": { 00:21:16.841 "lvol_store_uuid": "7f7d2af8-ffd3-44c6-9baf-2e463d41d3df", 00:21:16.841 "base_bdev": "nvme0n1", 00:21:16.841 "thin_provision": true, 00:21:16.841 "num_allocated_clusters": 0, 00:21:16.841 "snapshot": false, 00:21:16.841 "clone": false, 00:21:16.841 "esnap_clone": false 00:21:16.841 } 00:21:16.841 } 00:21:16.841 } 00:21:16.841 ]' 00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:16.841 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:17.100 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:17.100 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:17.100 08:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:17.100 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:17.100 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:17.101 08:41:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:17.360 08:41:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:17.360 08:41:52 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:17.360 08:41:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:17.360 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:17.360 08:41:52 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:17.360 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:17.360 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:17.360 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:17.360 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:17.360 { 00:21:17.360 "name": "851a3538-62bc-4c5e-b75f-11110e3afbc3", 00:21:17.360 "aliases": [ 00:21:17.360 "lvs/nvme0n1p0" 00:21:17.360 ], 00:21:17.360 "product_name": "Logical Volume", 00:21:17.360 "block_size": 4096, 00:21:17.360 "num_blocks": 26476544, 00:21:17.360 "uuid": "851a3538-62bc-4c5e-b75f-11110e3afbc3", 00:21:17.360 "assigned_rate_limits": { 00:21:17.360 "rw_ios_per_sec": 0, 00:21:17.360 "rw_mbytes_per_sec": 0, 00:21:17.360 "r_mbytes_per_sec": 0, 00:21:17.360 "w_mbytes_per_sec": 0 00:21:17.360 }, 00:21:17.360 "claimed": false, 00:21:17.360 "zoned": false, 00:21:17.360 "supported_io_types": { 00:21:17.360 "read": true, 00:21:17.360 "write": true, 00:21:17.360 "unmap": true, 00:21:17.360 "flush": false, 00:21:17.360 "reset": true, 00:21:17.360 "nvme_admin": false, 00:21:17.360 "nvme_io": false, 00:21:17.361 "nvme_io_md": false, 00:21:17.361 "write_zeroes": true, 00:21:17.361 "zcopy": false, 00:21:17.361 "get_zone_info": false, 00:21:17.361 "zone_management": false, 00:21:17.361 "zone_append": false, 00:21:17.361 "compare": false, 00:21:17.361 "compare_and_write": false, 00:21:17.361 "abort": false, 00:21:17.361 "seek_hole": true, 00:21:17.361 "seek_data": true, 00:21:17.361 "copy": false, 00:21:17.361 "nvme_iov_md": false 00:21:17.361 }, 00:21:17.361 "driver_specific": { 00:21:17.361 "lvol": { 00:21:17.361 "lvol_store_uuid": "7f7d2af8-ffd3-44c6-9baf-2e463d41d3df", 00:21:17.361 "base_bdev": "nvme0n1", 00:21:17.361 "thin_provision": true, 00:21:17.361 "num_allocated_clusters": 0, 00:21:17.361 "snapshot": false, 00:21:17.361 "clone": false, 00:21:17.361 "esnap_clone": false 00:21:17.361 } 00:21:17.361 } 00:21:17.361 } 00:21:17.361 ]' 00:21:17.361 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:17.621 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:17.621 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 851a3538-62bc-4c5e-b75f-11110e3afbc3 00:21:17.880 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:17.880 { 00:21:17.880 "name": "851a3538-62bc-4c5e-b75f-11110e3afbc3", 00:21:17.880 "aliases": [ 00:21:17.880 "lvs/nvme0n1p0" 00:21:17.880 ], 00:21:17.880 "product_name": "Logical Volume", 00:21:17.880 "block_size": 4096, 00:21:17.880 "num_blocks": 26476544, 00:21:17.880 "uuid": "851a3538-62bc-4c5e-b75f-11110e3afbc3", 00:21:17.880 "assigned_rate_limits": { 00:21:17.880 "rw_ios_per_sec": 0, 00:21:17.880 "rw_mbytes_per_sec": 0, 00:21:17.880 "r_mbytes_per_sec": 0, 00:21:17.880 "w_mbytes_per_sec": 0 00:21:17.880 }, 00:21:17.880 "claimed": false, 00:21:17.880 "zoned": false, 00:21:17.880 "supported_io_types": { 00:21:17.880 "read": true, 00:21:17.880 "write": true, 00:21:17.880 "unmap": true, 00:21:17.880 "flush": false, 00:21:17.881 "reset": true, 00:21:17.881 "nvme_admin": false, 00:21:17.881 "nvme_io": false, 00:21:17.881 "nvme_io_md": false, 00:21:17.881 "write_zeroes": true, 00:21:17.881 "zcopy": false, 00:21:17.881 "get_zone_info": false, 00:21:17.881 "zone_management": false, 00:21:17.881 "zone_append": false, 00:21:17.881 "compare": false, 00:21:17.881 "compare_and_write": false, 00:21:17.881 "abort": false, 00:21:17.881 "seek_hole": true, 00:21:17.881 "seek_data": true, 00:21:17.881 "copy": false, 00:21:17.881 "nvme_iov_md": false 00:21:17.881 }, 00:21:17.881 "driver_specific": { 00:21:17.881 "lvol": { 00:21:17.881 "lvol_store_uuid": "7f7d2af8-ffd3-44c6-9baf-2e463d41d3df", 00:21:17.881 "base_bdev": "nvme0n1", 00:21:17.881 "thin_provision": true, 00:21:17.881 "num_allocated_clusters": 0, 00:21:17.881 "snapshot": false, 00:21:17.881 "clone": false, 00:21:17.881 "esnap_clone": false 00:21:17.881 } 00:21:17.881 } 00:21:17.881 } 00:21:17.881 ]' 00:21:17.881 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:17.881 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:17.881 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:18.142 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:18.142 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:18.142 08:41:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:18.142 08:41:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:18.142 08:41:52 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:18.142 08:41:52 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 851a3538-62bc-4c5e-b75f-11110e3afbc3 -c nvc0n1p0 --l2p_dram_limit 60 00:21:18.142 [2024-11-22 08:41:53.146301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 08:41:53.146816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:18.142 [2024-11-22 08:41:53.146902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:18.142 
[2024-11-22 08:41:53.146991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.147141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 08:41:53.147201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.142 [2024-11-22 08:41:53.147259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:18.142 [2024-11-22 08:41:53.147310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.147394] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:18.142 [2024-11-22 08:41:53.148419] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:18.142 [2024-11-22 08:41:53.148620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 08:41:53.148695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.142 [2024-11-22 08:41:53.148754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:21:18.142 [2024-11-22 08:41:53.148809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.149001] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0bcfc01e-9717-47c1-b6db-52d815187a7b 00:21:18.142 [2024-11-22 08:41:53.150619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 08:41:53.150796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:18.142 [2024-11-22 08:41:53.150866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:18.142 [2024-11-22 08:41:53.150883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.158612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 08:41:53.158796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.142 [2024-11-22 08:41:53.158996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.633 ms 00:21:18.142 [2024-11-22 08:41:53.159149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.159345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 08:41:53.159447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.142 [2024-11-22 08:41:53.159594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:21:18.142 [2024-11-22 08:41:53.159742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.159886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 08:41:53.159993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:18.142 [2024-11-22 08:41:53.160136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:18.142 [2024-11-22 08:41:53.160287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.160398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:18.142 [2024-11-22 08:41:53.165677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 
08:41:53.165862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.142 [2024-11-22 08:41:53.166026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.292 ms 00:21:18.142 [2024-11-22 08:41:53.166178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.166374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.142 [2024-11-22 08:41:53.166465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:18.142 [2024-11-22 08:41:53.166631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:18.142 [2024-11-22 08:41:53.166728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.142 [2024-11-22 08:41:53.166859] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:18.142 [2024-11-22 08:41:53.167121] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:18.142 [2024-11-22 08:41:53.167374] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:18.142 [2024-11-22 08:41:53.167547] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:18.142 [2024-11-22 08:41:53.167673] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:18.142 [2024-11-22 08:41:53.167814] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:18.142 [2024-11-22 08:41:53.167981] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:18.142 [2024-11-22 08:41:53.168068] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:18.142 [2024-11-22 08:41:53.168271] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:18.142 [2024-11-22 08:41:53.168353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:18.142 [2024-11-22 08:41:53.168502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.143 [2024-11-22 08:41:53.168579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:18.143 [2024-11-22 08:41:53.168724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:21:18.143 [2024-11-22 08:41:53.168811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.143 [2024-11-22 08:41:53.169039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.143 [2024-11-22 08:41:53.169130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:18.143 [2024-11-22 08:41:53.169317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:18.143 [2024-11-22 08:41:53.169407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.143 [2024-11-22 08:41:53.169738] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:18.143 [2024-11-22 08:41:53.169887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:18.143 [2024-11-22 08:41:53.170004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.143 [2024-11-22 08:41:53.170198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.143 [2024-11-22 08:41:53.170291] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:18.143 [2024-11-22 08:41:53.170543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:18.143 [2024-11-22 08:41:53.170651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:18.143 [2024-11-22 08:41:53.170911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:18.143 [2024-11-22 08:41:53.171025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:18.143 [2024-11-22 08:41:53.171183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.143 [2024-11-22 08:41:53.171241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:18.143 [2024-11-22 08:41:53.171316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:18.143 [2024-11-22 08:41:53.171378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.143 [2024-11-22 08:41:53.171442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:18.143 [2024-11-22 08:41:53.171502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:18.143 [2024-11-22 08:41:53.171660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.143 [2024-11-22 08:41:53.171741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:18.143 [2024-11-22 08:41:53.171793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:18.143 [2024-11-22 08:41:53.171850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.143 [2024-11-22 08:41:53.171888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:18.143 [2024-11-22 08:41:53.171937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:18.143 [2024-11-22 08:41:53.172083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.143 [2024-11-22 08:41:53.172164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:18.143 [2024-11-22 08:41:53.172217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:18.143 [2024-11-22 08:41:53.172270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.143 [2024-11-22 08:41:53.172313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:18.143 [2024-11-22 08:41:53.172446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:18.143 [2024-11-22 08:41:53.172519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.143 [2024-11-22 08:41:53.172567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:18.143 [2024-11-22 08:41:53.172629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:18.143 [2024-11-22 08:41:53.172678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.143 [2024-11-22 08:41:53.172721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:18.143 [2024-11-22 08:41:53.172859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:18.143 [2024-11-22 08:41:53.172928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.143 [2024-11-22 08:41:53.172998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:18.143 [2024-11-22 08:41:53.173070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:18.143 [2024-11-22 08:41:53.173209] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.143 [2024-11-22 08:41:53.173276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:18.143 [2024-11-22 08:41:53.173328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:18.143 [2024-11-22 08:41:53.173385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.143 [2024-11-22 08:41:53.173430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:18.143 [2024-11-22 08:41:53.173560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:18.143 [2024-11-22 08:41:53.173638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.143 [2024-11-22 08:41:53.173688] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:18.143 [2024-11-22 08:41:53.173744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:18.143 [2024-11-22 08:41:53.173889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.143 [2024-11-22 08:41:53.173976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.143 [2024-11-22 08:41:53.174024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:18.143 [2024-11-22 08:41:53.174081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:18.143 [2024-11-22 08:41:53.174210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:18.143 [2024-11-22 08:41:53.174288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:18.143 [2024-11-22 08:41:53.174334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:18.143 [2024-11-22 08:41:53.174387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:18.143 [2024-11-22 08:41:53.174447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:18.143 [2024-11-22 08:41:53.174572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.143 [2024-11-22 08:41:53.174662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:18.143 [2024-11-22 08:41:53.174720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:18.143 [2024-11-22 08:41:53.174857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:18.143 [2024-11-22 08:41:53.174939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:18.143 [2024-11-22 08:41:53.175010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:18.143 [2024-11-22 08:41:53.175064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:18.143 [2024-11-22 08:41:53.175117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:18.143 [2024-11-22 08:41:53.175265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:18.143 [2024-11-22 08:41:53.175338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:18.143 [2024-11-22 08:41:53.175400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:18.143 [2024-11-22 08:41:53.175441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:18.143 [2024-11-22 08:41:53.175491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:18.143 [2024-11-22 08:41:53.175547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:18.143 [2024-11-22 08:41:53.175694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:18.143 [2024-11-22 08:41:53.175772] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:18.143 [2024-11-22 08:41:53.175830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.143 [2024-11-22 08:41:53.175898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:18.143 [2024-11-22 08:41:53.175973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:18.143 [2024-11-22 08:41:53.176116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:18.143 [2024-11-22 08:41:53.176203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:18.143 [2024-11-22 08:41:53.176249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.143 [2024-11-22 08:41:53.176296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:18.143 [2024-11-22 08:41:53.176343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.581 ms 00:21:18.143 [2024-11-22 08:41:53.176474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.143 [2024-11-22 08:41:53.176638] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
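
For reference, the capacity figures in the layout dump above can be reproduced from values already printed in this log: bdev_get_bdevs reported 26476544 blocks of 4096 B for the base lvol, and the layout reports 20971520 L2P entries with a 4 B address size (one entry per logical block of the 20971520-block ftl0 device). A minimal shell sketch of that arithmetic — the variable names are illustrative, not part of the test scripts:

  # Base device capacity: block count x block size, in MiB
  blocks=26476544; bs=4096
  echo "base capacity: $(( blocks * bs / 1024 / 1024 )) MiB"    # -> 103424 MiB, as dumped
  # L2P region: one 4-byte address entry per logical block
  l2p_entries=20971520; l2p_addr_size=4
  echo "l2p region: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"  # -> 80 MiB, as dumped

The 80 MiB L2P footprint is also why the --l2p_dram_limit 60 passed to bdev_ftl_create surfaces later as "l2p maximum resident size is: 59 (of 60) MiB": the full table exceeds the allowed DRAM budget, so FTL keeps at most 60 MiB of it resident.
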
00:21:18.143 [2024-11-22 08:41:53.176781] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:22.338 [2024-11-22 08:41:56.726103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.726632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:22.338 [2024-11-22 08:41:56.726842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3555.227 ms 00:21:22.338 [2024-11-22 08:41:56.726938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.764974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.765227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.338 [2024-11-22 08:41:56.765408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.689 ms 00:21:22.338 [2024-11-22 08:41:56.765574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.765791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.765925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:22.338 [2024-11-22 08:41:56.766112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:22.338 [2024-11-22 08:41:56.766284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.823018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.823280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.338 [2024-11-22 08:41:56.823483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.682 ms 00:21:22.338 [2024-11-22 08:41:56.823717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.823864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.824040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.338 [2024-11-22 08:41:56.824223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:22.338 [2024-11-22 08:41:56.824408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.825067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.825275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.338 [2024-11-22 08:41:56.825450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:21:22.338 [2024-11-22 08:41:56.825623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.825947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.826143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.338 [2024-11-22 08:41:56.826313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:22.338 [2024-11-22 08:41:56.826480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.848729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.848944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.338 [2024-11-22 
08:41:56.849178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.135 ms 00:21:22.338 [2024-11-22 08:41:56.849278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.861825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:22.338 [2024-11-22 08:41:56.878160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.878464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:22.338 [2024-11-22 08:41:56.878677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.699 ms 00:21:22.338 [2024-11-22 08:41:56.878871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.973771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.974085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:22.338 [2024-11-22 08:41:56.974252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.913 ms 00:21:22.338 [2024-11-22 08:41:56.974427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:56.974670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:56.974812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:22.338 [2024-11-22 08:41:56.974918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:21:22.338 [2024-11-22 08:41:56.975096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:57.011238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:57.011490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:22.338 [2024-11-22 08:41:57.011666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.084 ms 00:21:22.338 [2024-11-22 08:41:57.011757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:57.046842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:57.047075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:22.338 [2024-11-22 08:41:57.047260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.957 ms 00:21:22.338 [2024-11-22 08:41:57.047507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.338 [2024-11-22 08:41:57.048355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.338 [2024-11-22 08:41:57.048518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:22.338 [2024-11-22 08:41:57.048589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:21:22.338 [2024-11-22 08:41:57.048719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.339 [2024-11-22 08:41:57.150474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.339 [2024-11-22 08:41:57.150709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:22.339 [2024-11-22 08:41:57.150888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.792 ms 00:21:22.339 [2024-11-22 08:41:57.151063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.339 [2024-11-22 
08:41:57.188928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.339 [2024-11-22 08:41:57.189155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:22.339 [2024-11-22 08:41:57.189353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.746 ms 00:21:22.339 [2024-11-22 08:41:57.189517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.339 [2024-11-22 08:41:57.225632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.339 [2024-11-22 08:41:57.225839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:22.339 [2024-11-22 08:41:57.226017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.045 ms 00:21:22.339 [2024-11-22 08:41:57.226113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.339 [2024-11-22 08:41:57.262511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.339 [2024-11-22 08:41:57.262755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:22.339 [2024-11-22 08:41:57.262919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.257 ms 00:21:22.339 [2024-11-22 08:41:57.263040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.339 [2024-11-22 08:41:57.263193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.339 [2024-11-22 08:41:57.263253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:22.339 [2024-11-22 08:41:57.263319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:22.339 [2024-11-22 08:41:57.263335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.339 [2024-11-22 08:41:57.263472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.339 [2024-11-22 08:41:57.263486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:22.339 [2024-11-22 08:41:57.263499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:22.339 [2024-11-22 08:41:57.263509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.339 [2024-11-22 08:41:57.264636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4124.598 ms, result 0 00:21:22.339 { 00:21:22.339 "name": "ftl0", 00:21:22.339 "uuid": "0bcfc01e-9717-47c1-b6db-52d815187a7b" 00:21:22.339 } 00:21:22.339 08:41:57 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:22.339 08:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:22.339 08:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:22.339 08:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:21:22.339 08:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:22.339 08:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:22.339 08:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:22.598 08:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:22.858 [ 00:21:22.858 { 00:21:22.858 "name": "ftl0", 00:21:22.858 "aliases": [ 00:21:22.858 "0bcfc01e-9717-47c1-b6db-52d815187a7b" 00:21:22.858 ], 00:21:22.858 "product_name": "FTL 
disk", 00:21:22.858 "block_size": 4096, 00:21:22.858 "num_blocks": 20971520, 00:21:22.858 "uuid": "0bcfc01e-9717-47c1-b6db-52d815187a7b", 00:21:22.858 "assigned_rate_limits": { 00:21:22.858 "rw_ios_per_sec": 0, 00:21:22.858 "rw_mbytes_per_sec": 0, 00:21:22.858 "r_mbytes_per_sec": 0, 00:21:22.858 "w_mbytes_per_sec": 0 00:21:22.858 }, 00:21:22.858 "claimed": false, 00:21:22.858 "zoned": false, 00:21:22.858 "supported_io_types": { 00:21:22.858 "read": true, 00:21:22.858 "write": true, 00:21:22.858 "unmap": true, 00:21:22.858 "flush": true, 00:21:22.858 "reset": false, 00:21:22.858 "nvme_admin": false, 00:21:22.858 "nvme_io": false, 00:21:22.858 "nvme_io_md": false, 00:21:22.858 "write_zeroes": true, 00:21:22.858 "zcopy": false, 00:21:22.858 "get_zone_info": false, 00:21:22.858 "zone_management": false, 00:21:22.858 "zone_append": false, 00:21:22.858 "compare": false, 00:21:22.858 "compare_and_write": false, 00:21:22.858 "abort": false, 00:21:22.858 "seek_hole": false, 00:21:22.858 "seek_data": false, 00:21:22.858 "copy": false, 00:21:22.858 "nvme_iov_md": false 00:21:22.858 }, 00:21:22.858 "driver_specific": { 00:21:22.858 "ftl": { 00:21:22.858 "base_bdev": "851a3538-62bc-4c5e-b75f-11110e3afbc3", 00:21:22.858 "cache": "nvc0n1p0" 00:21:22.858 } 00:21:22.858 } 00:21:22.858 } 00:21:22.858 ] 00:21:22.858 08:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:21:22.858 08:41:57 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:22.858 08:41:57 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:22.858 08:41:57 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:22.858 08:41:57 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:23.118 [2024-11-22 08:41:58.103733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.118 [2024-11-22 08:41:58.103944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:23.118 [2024-11-22 08:41:58.104089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:23.118 [2024-11-22 08:41:58.104112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.118 [2024-11-22 08:41:58.104169] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:23.118 [2024-11-22 08:41:58.108514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.118 [2024-11-22 08:41:58.108547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:23.118 [2024-11-22 08:41:58.108563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.328 ms 00:21:23.118 [2024-11-22 08:41:58.108573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.118 [2024-11-22 08:41:58.109024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.118 [2024-11-22 08:41:58.109038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:23.118 [2024-11-22 08:41:58.109052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:21:23.118 [2024-11-22 08:41:58.109062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.118 [2024-11-22 08:41:58.111580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.118 [2024-11-22 08:41:58.111607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:23.118 
[2024-11-22 08:41:58.111621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.495 ms 00:21:23.118 [2024-11-22 08:41:58.111630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.118 [2024-11-22 08:41:58.116694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.118 [2024-11-22 08:41:58.116728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:23.118 [2024-11-22 08:41:58.116742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.037 ms 00:21:23.118 [2024-11-22 08:41:58.116768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.118 [2024-11-22 08:41:58.154202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.118 [2024-11-22 08:41:58.154353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:23.118 [2024-11-22 08:41:58.154465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.402 ms 00:21:23.118 [2024-11-22 08:41:58.154481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.118 [2024-11-22 08:41:58.177191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.118 [2024-11-22 08:41:58.177227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:23.118 [2024-11-22 08:41:58.177245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.675 ms 00:21:23.118 [2024-11-22 08:41:58.177274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.118 [2024-11-22 08:41:58.177470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.118 [2024-11-22 08:41:58.177484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:23.118 [2024-11-22 08:41:58.177498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:23.118 [2024-11-22 08:41:58.177508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.382 [2024-11-22 08:41:58.214591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.382 [2024-11-22 08:41:58.214627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:23.382 [2024-11-22 08:41:58.214643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.107 ms 00:21:23.382 [2024-11-22 08:41:58.214653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.382 [2024-11-22 08:41:58.250872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.382 [2024-11-22 08:41:58.250908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:23.382 [2024-11-22 08:41:58.250924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.227 ms 00:21:23.382 [2024-11-22 08:41:58.250934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.382 [2024-11-22 08:41:58.286419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.382 [2024-11-22 08:41:58.286455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:23.382 [2024-11-22 08:41:58.286470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.476 ms 00:21:23.382 [2024-11-22 08:41:58.286480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.382 [2024-11-22 08:41:58.321831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.382 [2024-11-22 08:41:58.321864] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:23.382 [2024-11-22 08:41:58.321880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.280 ms 00:21:23.382 [2024-11-22 08:41:58.321905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.382 [2024-11-22 08:41:58.321970] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:23.382 [2024-11-22 08:41:58.321988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 
[2024-11-22 08:41:58.322254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:23.382 [2024-11-22 08:41:58.322341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:21:23.383 [2024-11-22 08:41:58.322558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.322992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:23.383 [2024-11-22 08:41:58.323234] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:23.383 [2024-11-22 08:41:58.323247] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bcfc01e-9717-47c1-b6db-52d815187a7b 00:21:23.383 [2024-11-22 08:41:58.323258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:23.383 [2024-11-22 08:41:58.323272] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:23.383 [2024-11-22 08:41:58.323282] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:23.383 [2024-11-22 08:41:58.323300] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:23.383 [2024-11-22 08:41:58.323310] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:23.383 [2024-11-22 08:41:58.323322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:23.383 [2024-11-22 08:41:58.323332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:23.383 [2024-11-22 08:41:58.323344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:23.383 [2024-11-22 08:41:58.323352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:23.383 [2024-11-22 08:41:58.323364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.383 [2024-11-22 08:41:58.323374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:23.383 [2024-11-22 08:41:58.323387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:21:23.383 [2024-11-22 08:41:58.323397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.383 [2024-11-22 08:41:58.343617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.383 [2024-11-22 08:41:58.343655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:23.383 [2024-11-22 08:41:58.343669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.188 ms 00:21:23.383 [2024-11-22 08:41:58.343695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.383 [2024-11-22 08:41:58.344253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.383 [2024-11-22 08:41:58.344266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:23.383 [2024-11-22 08:41:58.344279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:21:23.383 [2024-11-22 08:41:58.344289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.383 [2024-11-22 08:41:58.413457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.383 [2024-11-22 08:41:58.413502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.384 [2024-11-22 08:41:58.413517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.384 [2024-11-22 08:41:58.413544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
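
One detail worth noting in the statistics block above: WAF is reported as inf because write amplification is conventionally the ratio of media writes to user writes, and this startup/shutdown cycle performed only internal metadata I/O (total writes: 960, user writes: 0). A rough sketch of the same calculation, assuming just the two counters printed by ftl_debug.c:

  # Recompute the WAF line from the dumped counters (illustrative only)
  total_writes=960; user_writes=0
  if (( user_writes == 0 )); then
      echo "WAF: inf"                    # no user I/O yet -> ratio is undefined/infinite
  else
      printf 'WAF: %.2f\n' "$(bc -l <<< "$total_writes / $user_writes")"
  fi
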
00:21:23.384 [2024-11-22 08:41:58.413617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.384 [2024-11-22 08:41:58.413629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.384 [2024-11-22 08:41:58.413643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.384 [2024-11-22 08:41:58.413653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.384 [2024-11-22 08:41:58.413776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.384 [2024-11-22 08:41:58.413791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.384 [2024-11-22 08:41:58.413807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.384 [2024-11-22 08:41:58.413817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.384 [2024-11-22 08:41:58.413848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.384 [2024-11-22 08:41:58.413859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.384 [2024-11-22 08:41:58.413872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.384 [2024-11-22 08:41:58.413883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.545476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.643 [2024-11-22 08:41:58.545755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.643 [2024-11-22 08:41:58.545783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.643 [2024-11-22 08:41:58.545794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.645726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.643 [2024-11-22 08:41:58.645784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.643 [2024-11-22 08:41:58.645803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.643 [2024-11-22 08:41:58.645814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.645930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.643 [2024-11-22 08:41:58.645943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.643 [2024-11-22 08:41:58.645974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.643 [2024-11-22 08:41:58.645989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.646084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.643 [2024-11-22 08:41:58.646098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.643 [2024-11-22 08:41:58.646111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.643 [2024-11-22 08:41:58.646121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.646260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.643 [2024-11-22 08:41:58.646274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.643 [2024-11-22 08:41:58.646288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.643 [2024-11-22 
08:41:58.646298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.646362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.643 [2024-11-22 08:41:58.646374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:23.643 [2024-11-22 08:41:58.646388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.643 [2024-11-22 08:41:58.646398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.646445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.643 [2024-11-22 08:41:58.646456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.643 [2024-11-22 08:41:58.646470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.643 [2024-11-22 08:41:58.646479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.646542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.643 [2024-11-22 08:41:58.646554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.643 [2024-11-22 08:41:58.646567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.643 [2024-11-22 08:41:58.646576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.643 [2024-11-22 08:41:58.646750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.867 ms, result 0 00:21:23.643 true 00:21:23.643 08:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76786 00:21:23.643 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76786 ']' 00:21:23.643 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76786 00:21:23.643 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:21:23.643 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.643 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76786 00:21:23.902 killing process with pid 76786 00:21:23.902 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:23.902 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:23.902 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76786' 00:21:23.902 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76786 00:21:23.902 08:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76786 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:29.288 08:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:29.288 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:21:29.288 fio-3.35 00:21:29.288 Starting 1 thread 00:21:34.559 00:21:34.559 test: (groupid=0, jobs=1): err= 0: pid=76999: Fri Nov 22 08:42:08 2024 00:21:34.559 read: IOPS=943, BW=62.7MiB/s (65.7MB/s)(255MiB/4061msec) 00:21:34.559 slat (usec): min=4, max=2428, avg= 6.49, stdev=39.19 00:21:34.559 clat (usec): min=315, max=990, avg=481.57, stdev=53.16 00:21:34.559 lat (usec): min=320, max=3021, avg=488.06, stdev=67.28 00:21:34.559 clat percentiles (usec): 00:21:34.559 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 400], 20.00th=[ 449], 00:21:34.559 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 478], 60.00th=[ 515], 00:21:34.559 | 70.00th=[ 519], 80.00th=[ 523], 90.00th=[ 529], 95.00th=[ 537], 00:21:34.560 | 99.00th=[ 594], 99.50th=[ 635], 99.90th=[ 922], 99.95th=[ 955], 00:21:34.560 | 99.99th=[ 988] 00:21:34.560 write: IOPS=950, BW=63.1MiB/s (66.2MB/s)(256MiB/4057msec); 0 zone resets 00:21:34.560 slat (usec): min=15, max=106, avg=19.25, stdev= 4.40 00:21:34.560 clat (usec): min=352, max=4449, avg=538.09, stdev=94.61 00:21:34.560 lat (usec): min=375, max=4477, avg=557.33, stdev=95.02 00:21:34.560 clat percentiles (usec): 00:21:34.560 | 1.00th=[ 404], 5.00th=[ 437], 10.00th=[ 469], 20.00th=[ 478], 00:21:34.560 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[ 537], 60.00th=[ 545], 00:21:34.560 | 70.00th=[ 553], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 619], 00:21:34.560 | 99.00th=[ 857], 99.50th=[ 914], 99.90th=[ 979], 99.95th=[ 1057], 00:21:34.560 | 99.99th=[ 4424] 00:21:34.560 bw ( KiB/s): min=61336, max=66096, per=99.93%, avg=64583.00, stdev=1460.14, samples=8 00:21:34.560 iops : min= 902, max= 972, avg=949.75, stdev=21.47, samples=8 00:21:34.560 lat (usec) : 500=41.71%, 750=57.24%, 1000=1.03% 00:21:34.560 lat (msec) : 2=0.01%, 
10=0.01% 00:21:34.560 cpu : usr=99.26%, sys=0.12%, ctx=10, majf=0, minf=1169 00:21:34.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:34.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.560 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:34.560 00:21:34.560 Run status group 0 (all jobs): 00:21:34.560 READ: bw=62.7MiB/s (65.7MB/s), 62.7MiB/s-62.7MiB/s (65.7MB/s-65.7MB/s), io=255MiB (267MB), run=4061-4061msec 00:21:34.560 WRITE: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=256MiB (269MB), run=4057-4057msec 00:21:35.939 ----------------------------------------------------- 00:21:35.939 Suppressions used: 00:21:35.939 count bytes template 00:21:35.939 1 5 /usr/src/fio/parse.c 00:21:35.939 1 8 libtcmalloc_minimal.so 00:21:35.939 1 904 libcrypto.so 00:21:35.939 ----------------------------------------------------- 00:21:35.939 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:35.939 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:35.940 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:35.940 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:35.940 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:35.940 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:35.940 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:35.940 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:35.940 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:35.940 08:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:36.200 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:36.200 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:36.200 fio-3.35 00:21:36.200 Starting 2 threads 00:22:02.781 00:22:02.781 first_half: (groupid=0, jobs=1): err= 0: pid=77102: Fri Nov 22 08:42:35 2024 00:22:02.781 read: IOPS=2878, BW=11.2MiB/s (11.8MB/s)(256MiB/22748msec) 00:22:02.781 slat (nsec): min=3465, max=87638, avg=5913.44, stdev=2020.23 00:22:02.781 clat (usec): min=591, max=249959, avg=37721.18, stdev=22707.68 00:22:02.781 lat (usec): min=595, max=249966, avg=37727.09, stdev=22707.98 00:22:02.781 clat percentiles (msec): 00:22:02.781 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:22:02.781 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:22:02.781 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 72], 00:22:02.781 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 194], 99.95th=[ 218], 00:22:02.781 | 99.99th=[ 245] 00:22:02.781 write: IOPS=2884, BW=11.3MiB/s (11.8MB/s)(256MiB/22718msec); 0 zone resets 00:22:02.781 slat (usec): min=4, max=400, avg= 7.08, stdev= 3.95 00:22:02.781 clat (usec): min=407, max=38989, avg=6714.58, stdev=6351.19 00:22:02.781 lat (usec): min=416, max=38995, avg=6721.66, stdev=6351.27 00:22:02.781 clat percentiles (usec): 00:22:02.781 | 1.00th=[ 1020], 5.00th=[ 1287], 10.00th=[ 1582], 20.00th=[ 2868], 00:22:02.781 | 30.00th=[ 3752], 40.00th=[ 4817], 50.00th=[ 5342], 60.00th=[ 6128], 00:22:02.781 | 70.00th=[ 6521], 80.00th=[ 7832], 90.00th=[12125], 95.00th=[18482], 00:22:02.781 | 99.00th=[35390], 99.50th=[36439], 99.90th=[37487], 99.95th=[38011], 00:22:02.781 | 99.99th=[38536] 00:22:02.781 bw ( KiB/s): min= 1352, max=41712, per=100.00%, avg=24785.14, stdev=13000.34, samples=21 00:22:02.781 iops : min= 338, max=10428, avg=6196.19, stdev=3250.21, samples=21 00:22:02.781 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.36% 00:22:02.781 lat (msec) : 2=6.73%, 4=9.21%, 10=27.37%, 20=5.32%, 50=47.59% 00:22:02.781 lat (msec) : 100=1.55%, 250=1.79% 00:22:02.781 cpu : usr=99.14%, sys=0.17%, ctx=35, majf=0, minf=5556 00:22:02.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:02.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.781 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:02.781 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:02.781 second_half: (groupid=0, jobs=1): err= 0: pid=77103: Fri Nov 22 08:42:35 2024 00:22:02.781 read: IOPS=2900, BW=11.3MiB/s (11.9MB/s)(256MiB/22580msec) 00:22:02.781 slat (nsec): min=3418, max=34840, avg=5982.18, stdev=2069.18 00:22:02.781 clat (msec): min=9, max=192, avg=38.05, stdev=20.18 00:22:02.781 lat (msec): min=9, max=192, avg=38.06, stdev=20.18 00:22:02.781 clat percentiles (msec): 00:22:02.781 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:22:02.781 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:22:02.781 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 39], 95.00th=[ 67], 00:22:02.781 | 99.00th=[ 150], 99.50th=[ 159], 
99.90th=[ 176], 99.95th=[ 180], 00:22:02.781 | 99.99th=[ 188] 00:22:02.781 write: IOPS=3121, BW=12.2MiB/s (12.8MB/s)(256MiB/20998msec); 0 zone resets 00:22:02.781 slat (usec): min=4, max=380, avg= 7.07, stdev= 4.89 00:22:02.781 clat (usec): min=471, max=41514, avg=6056.32, stdev=3647.38 00:22:02.781 lat (usec): min=476, max=41519, avg=6063.39, stdev=3647.54 00:22:02.781 clat percentiles (usec): 00:22:02.781 | 1.00th=[ 1188], 5.00th=[ 1844], 10.00th=[ 2442], 20.00th=[ 3490], 00:22:02.781 | 30.00th=[ 4490], 40.00th=[ 4948], 50.00th=[ 5407], 60.00th=[ 5997], 00:22:02.781 | 70.00th=[ 6325], 80.00th=[ 7439], 90.00th=[11076], 95.00th=[12387], 00:22:02.781 | 99.00th=[19006], 99.50th=[27395], 99.90th=[34866], 99.95th=[35914], 00:22:02.781 | 99.99th=[40109] 00:22:02.781 bw ( KiB/s): min= 1632, max=47584, per=100.00%, avg=27589.79, stdev=14866.90, samples=19 00:22:02.781 iops : min= 408, max=11896, avg=6897.42, stdev=3716.70, samples=19 00:22:02.781 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.18% 00:22:02.781 lat (msec) : 2=2.89%, 4=10.29%, 10=29.88%, 20=6.28%, 50=47.11% 00:22:02.781 lat (msec) : 100=1.64%, 250=1.67% 00:22:02.781 cpu : usr=99.19%, sys=0.20%, ctx=29, majf=0, minf=5555 00:22:02.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:02.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.781 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:02.781 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:02.781 00:22:02.781 Run status group 0 (all jobs): 00:22:02.781 READ: bw=22.5MiB/s (23.6MB/s), 11.2MiB/s-11.3MiB/s (11.8MB/s-11.9MB/s), io=512MiB (536MB), run=22580-22748msec 00:22:02.781 WRITE: bw=22.5MiB/s (23.6MB/s), 11.3MiB/s-12.2MiB/s (11.8MB/s-12.8MB/s), io=512MiB (537MB), run=20998-22718msec 00:22:02.781 ----------------------------------------------------- 00:22:02.782 Suppressions used: 00:22:02.782 count bytes template 00:22:02.782 2 10 /usr/src/fio/parse.c 00:22:02.782 2 192 /usr/src/fio/iolog.c 00:22:02.782 1 8 libtcmalloc_minimal.so 00:22:02.782 1 904 libcrypto.so 00:22:02.782 ----------------------------------------------------- 00:22:02.782 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 
-- # local sanitizers 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:02.782 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:03.041 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:03.041 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:03.041 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:03.041 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:03.041 08:42:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:03.041 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:03.041 fio-3.35 00:22:03.041 Starting 1 thread 00:22:17.927 00:22:17.927 test: (groupid=0, jobs=1): err= 0: pid=77405: Fri Nov 22 08:42:52 2024 00:22:17.927 read: IOPS=7942, BW=31.0MiB/s (32.5MB/s)(255MiB/8209msec) 00:22:17.927 slat (nsec): min=3297, max=70990, avg=5094.30, stdev=1919.96 00:22:17.927 clat (usec): min=650, max=38853, avg=16107.12, stdev=1061.69 00:22:17.927 lat (usec): min=654, max=38862, avg=16112.22, stdev=1061.90 00:22:17.927 clat percentiles (usec): 00:22:17.927 | 1.00th=[15008], 5.00th=[15270], 10.00th=[15401], 20.00th=[15664], 00:22:17.927 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16057], 60.00th=[16188], 00:22:17.927 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16581], 95.00th=[16909], 00:22:17.927 | 99.00th=[19530], 99.50th=[20317], 99.90th=[28967], 99.95th=[33817], 00:22:17.927 | 99.99th=[38011] 00:22:17.927 write: IOPS=13.9k, BW=54.4MiB/s (57.0MB/s)(256MiB/4709msec); 0 zone resets 00:22:17.927 slat (usec): min=4, max=1250, avg= 7.44, stdev= 8.04 00:22:17.927 clat (usec): min=645, max=51817, avg=9150.72, stdev=10971.87 00:22:17.927 lat (usec): min=651, max=51826, avg=9158.16, stdev=10971.89 00:22:17.927 clat percentiles (usec): 00:22:17.927 | 1.00th=[ 922], 5.00th=[ 1090], 10.00th=[ 1221], 20.00th=[ 1418], 00:22:17.927 | 30.00th=[ 1582], 40.00th=[ 1893], 50.00th=[ 6128], 60.00th=[ 7177], 00:22:17.927 | 70.00th=[ 8356], 80.00th=[10159], 90.00th=[32900], 95.00th=[34341], 00:22:17.927 | 99.00th=[35914], 99.50th=[36439], 99.90th=[38536], 99.95th=[41157], 00:22:17.927 | 99.99th=[46924] 00:22:17.927 bw ( KiB/s): min=19352, max=71432, per=94.18%, avg=52428.80, stdev=13805.71, samples=10 00:22:17.927 iops : min= 4838, max=17858, avg=13107.20, stdev=3451.43, samples=10 00:22:17.927 lat (usec) : 750=0.03%, 1000=1.25% 00:22:17.927 lat (msec) : 2=19.24%, 4=0.65%, 10=18.58%, 20=51.96%, 50=8.29% 00:22:17.927 lat (msec) : 100=0.01% 00:22:17.927 cpu : usr=98.83%, sys=0.36%, ctx=21, majf=0, minf=5565 00:22:17.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:17.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.927 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:17.927 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:17.927 00:22:17.927 Run status group 0 (all jobs): 00:22:17.927 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=255MiB (267MB), run=8209-8209msec 00:22:17.927 WRITE: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=256MiB (268MB), run=4709-4709msec 00:22:19.306 ----------------------------------------------------- 00:22:19.306 Suppressions used: 00:22:19.306 count bytes template 00:22:19.306 1 5 /usr/src/fio/parse.c 00:22:19.306 2 192 /usr/src/fio/iolog.c 00:22:19.306 1 8 libtcmalloc_minimal.so 00:22:19.306 1 904 libcrypto.so 00:22:19.306 ----------------------------------------------------- 00:22:19.306 00:22:19.306 08:42:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:19.306 08:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.306 08:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:19.566 Remove shared memory files 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57756 /dev/shm/spdk_tgt_trace.pid75691 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:19.566 ************************************ 00:22:19.566 END TEST ftl_fio_basic 00:22:19.566 ************************************ 00:22:19.566 00:22:19.566 real 1m5.491s 00:22:19.566 user 2m17.674s 00:22:19.566 sys 0m3.793s 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.566 08:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:19.566 08:42:54 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:19.566 08:42:54 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:19.566 08:42:54 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.566 08:42:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:19.566 ************************************ 00:22:19.566 START TEST ftl_bdevperf 00:22:19.566 ************************************ 00:22:19.566 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:19.566 * Looking for test storage... 
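All three fio runs in ftl_fio_basic above go through the same fio_bdev helper: it resolves whichever sanitizer runtime ('libasan' or 'libclang_rt.asan') the spdk_bdev fio plugin links against and preloads it ahead of the plugin, so ASAN interposes before the ioengine loads. A minimal sketch of that pattern, using the paths from this run:

  # Resolve the ASAN runtime the fio plugin was linked against.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload the sanitizer before the SPDK bdev ioengine, then run the job file.
  LD_PRELOAD="$asan_lib $plugin" \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

The real helper additionally loops over libclang_rt.asan for clang builds and breaks on the first library found, as the xtrace at autotest_common.sh@1348-1351 above shows.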
00:22:19.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:19.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.827 --rc genhtml_branch_coverage=1 00:22:19.827 --rc genhtml_function_coverage=1 00:22:19.827 --rc genhtml_legend=1 00:22:19.827 --rc geninfo_all_blocks=1 00:22:19.827 --rc geninfo_unexecuted_blocks=1 00:22:19.827 00:22:19.827 ' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:19.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.827 --rc genhtml_branch_coverage=1 00:22:19.827 
--rc genhtml_function_coverage=1 00:22:19.827 --rc genhtml_legend=1 00:22:19.827 --rc geninfo_all_blocks=1 00:22:19.827 --rc geninfo_unexecuted_blocks=1 00:22:19.827 00:22:19.827 ' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:19.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.827 --rc genhtml_branch_coverage=1 00:22:19.827 --rc genhtml_function_coverage=1 00:22:19.827 --rc genhtml_legend=1 00:22:19.827 --rc geninfo_all_blocks=1 00:22:19.827 --rc geninfo_unexecuted_blocks=1 00:22:19.827 00:22:19.827 ' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:19.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.827 --rc genhtml_branch_coverage=1 00:22:19.827 --rc genhtml_function_coverage=1 00:22:19.827 --rc genhtml_legend=1 00:22:19.827 --rc geninfo_all_blocks=1 00:22:19.827 --rc geninfo_unexecuted_blocks=1 00:22:19.827 00:22:19.827 ' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77639 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77639 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77639 ']' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.827 08:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:19.827 [2024-11-22 08:42:54.885062] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
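bdevperf is launched here with -z -T ftl0 and waitforlisten then blocks until pid 77639 answers on /var/tmp/spdk.sock. A rough sketch of that startup handshake, assuming the stock rpc.py client with spdk_get_version as the liveness probe (the helper's actual probe may differ):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  # Poll the RPC socket until the target responds, bailing out if the process died.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$bdevperf_pid" 2>/dev/null || exit 1
      sleep 0.1
  done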
00:22:19.827 [2024-11-22 08:42:54.885176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77639 ] 00:22:20.087 [2024-11-22 08:42:55.068212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.346 [2024-11-22 08:42:55.173158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:20.914 08:42:55 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:21.173 08:42:55 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:21.174 08:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:21.174 08:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:21.174 08:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:21.174 08:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:21.174 08:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:21.174 08:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:21.174 { 00:22:21.174 "name": "nvme0n1", 00:22:21.174 "aliases": [ 00:22:21.174 "44e81d21-1590-4997-8e3e-8492aaaaeb74" 00:22:21.174 ], 00:22:21.174 "product_name": "NVMe disk", 00:22:21.174 "block_size": 4096, 00:22:21.174 "num_blocks": 1310720, 00:22:21.174 "uuid": "44e81d21-1590-4997-8e3e-8492aaaaeb74", 00:22:21.174 "numa_id": -1, 00:22:21.174 "assigned_rate_limits": { 00:22:21.174 "rw_ios_per_sec": 0, 00:22:21.174 "rw_mbytes_per_sec": 0, 00:22:21.174 "r_mbytes_per_sec": 0, 00:22:21.174 "w_mbytes_per_sec": 0 00:22:21.174 }, 00:22:21.174 "claimed": true, 00:22:21.174 "claim_type": "read_many_write_one", 00:22:21.174 "zoned": false, 00:22:21.174 "supported_io_types": { 00:22:21.174 "read": true, 00:22:21.174 "write": true, 00:22:21.174 "unmap": true, 00:22:21.174 "flush": true, 00:22:21.174 "reset": true, 00:22:21.174 "nvme_admin": true, 00:22:21.174 "nvme_io": true, 00:22:21.174 "nvme_io_md": false, 00:22:21.174 "write_zeroes": true, 00:22:21.174 "zcopy": false, 00:22:21.174 "get_zone_info": false, 00:22:21.174 "zone_management": false, 00:22:21.174 "zone_append": false, 00:22:21.174 "compare": true, 00:22:21.174 "compare_and_write": false, 00:22:21.174 "abort": true, 00:22:21.174 "seek_hole": false, 00:22:21.174 "seek_data": false, 00:22:21.174 "copy": true, 00:22:21.174 "nvme_iov_md": false 00:22:21.174 }, 00:22:21.174 "driver_specific": { 00:22:21.174 
"nvme": [ 00:22:21.174 { 00:22:21.174 "pci_address": "0000:00:11.0", 00:22:21.174 "trid": { 00:22:21.174 "trtype": "PCIe", 00:22:21.174 "traddr": "0000:00:11.0" 00:22:21.174 }, 00:22:21.174 "ctrlr_data": { 00:22:21.174 "cntlid": 0, 00:22:21.174 "vendor_id": "0x1b36", 00:22:21.174 "model_number": "QEMU NVMe Ctrl", 00:22:21.174 "serial_number": "12341", 00:22:21.174 "firmware_revision": "8.0.0", 00:22:21.174 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:21.174 "oacs": { 00:22:21.174 "security": 0, 00:22:21.174 "format": 1, 00:22:21.174 "firmware": 0, 00:22:21.174 "ns_manage": 1 00:22:21.174 }, 00:22:21.174 "multi_ctrlr": false, 00:22:21.174 "ana_reporting": false 00:22:21.174 }, 00:22:21.174 "vs": { 00:22:21.174 "nvme_version": "1.4" 00:22:21.174 }, 00:22:21.174 "ns_data": { 00:22:21.174 "id": 1, 00:22:21.174 "can_share": false 00:22:21.174 } 00:22:21.174 } 00:22:21.174 ], 00:22:21.174 "mp_policy": "active_passive" 00:22:21.174 } 00:22:21.174 } 00:22:21.174 ]' 00:22:21.174 08:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:21.174 08:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:21.174 08:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=7f7d2af8-ffd3-44c6-9baf-2e463d41d3df 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:21.434 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f7d2af8-ffd3-44c6-9baf-2e463d41d3df 00:22:21.693 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:21.953 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=a0a42ede-2a61-4871-80cc-ed0868c0cf29 00:22:21.953 08:42:56 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a0a42ede-2a61-4871-80cc-ed0868c0cf29 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:22.212 08:42:57 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:22.212 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:22.472 { 00:22:22.472 "name": "cdfcebcc-c44b-4117-bade-be32e73b7d87", 00:22:22.472 "aliases": [ 00:22:22.472 "lvs/nvme0n1p0" 00:22:22.472 ], 00:22:22.472 "product_name": "Logical Volume", 00:22:22.472 "block_size": 4096, 00:22:22.472 "num_blocks": 26476544, 00:22:22.472 "uuid": "cdfcebcc-c44b-4117-bade-be32e73b7d87", 00:22:22.472 "assigned_rate_limits": { 00:22:22.472 "rw_ios_per_sec": 0, 00:22:22.472 "rw_mbytes_per_sec": 0, 00:22:22.472 "r_mbytes_per_sec": 0, 00:22:22.472 "w_mbytes_per_sec": 0 00:22:22.472 }, 00:22:22.472 "claimed": false, 00:22:22.472 "zoned": false, 00:22:22.472 "supported_io_types": { 00:22:22.472 "read": true, 00:22:22.472 "write": true, 00:22:22.472 "unmap": true, 00:22:22.472 "flush": false, 00:22:22.472 "reset": true, 00:22:22.472 "nvme_admin": false, 00:22:22.472 "nvme_io": false, 00:22:22.472 "nvme_io_md": false, 00:22:22.472 "write_zeroes": true, 00:22:22.472 "zcopy": false, 00:22:22.472 "get_zone_info": false, 00:22:22.472 "zone_management": false, 00:22:22.472 "zone_append": false, 00:22:22.472 "compare": false, 00:22:22.472 "compare_and_write": false, 00:22:22.472 "abort": false, 00:22:22.472 "seek_hole": true, 00:22:22.472 "seek_data": true, 00:22:22.472 "copy": false, 00:22:22.472 "nvme_iov_md": false 00:22:22.472 }, 00:22:22.472 "driver_specific": { 00:22:22.472 "lvol": { 00:22:22.472 "lvol_store_uuid": "a0a42ede-2a61-4871-80cc-ed0868c0cf29", 00:22:22.472 "base_bdev": "nvme0n1", 00:22:22.472 "thin_provision": true, 00:22:22.472 "num_allocated_clusters": 0, 00:22:22.472 "snapshot": false, 00:22:22.472 "clone": false, 00:22:22.472 "esnap_clone": false 00:22:22.472 } 00:22:22.472 } 00:22:22.472 } 00:22:22.472 ]' 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:22.472 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:22.732 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:22.732 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:22.732 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:22.732 08:42:57 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:22.732 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:22.732 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:22.732 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:22.732 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:23.004 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:23.004 { 00:22:23.004 "name": "cdfcebcc-c44b-4117-bade-be32e73b7d87", 00:22:23.004 "aliases": [ 00:22:23.004 "lvs/nvme0n1p0" 00:22:23.004 ], 00:22:23.004 "product_name": "Logical Volume", 00:22:23.004 "block_size": 4096, 00:22:23.004 "num_blocks": 26476544, 00:22:23.004 "uuid": "cdfcebcc-c44b-4117-bade-be32e73b7d87", 00:22:23.004 "assigned_rate_limits": { 00:22:23.004 "rw_ios_per_sec": 0, 00:22:23.004 "rw_mbytes_per_sec": 0, 00:22:23.004 "r_mbytes_per_sec": 0, 00:22:23.004 "w_mbytes_per_sec": 0 00:22:23.004 }, 00:22:23.004 "claimed": false, 00:22:23.004 "zoned": false, 00:22:23.004 "supported_io_types": { 00:22:23.004 "read": true, 00:22:23.004 "write": true, 00:22:23.004 "unmap": true, 00:22:23.004 "flush": false, 00:22:23.004 "reset": true, 00:22:23.004 "nvme_admin": false, 00:22:23.004 "nvme_io": false, 00:22:23.004 "nvme_io_md": false, 00:22:23.004 "write_zeroes": true, 00:22:23.004 "zcopy": false, 00:22:23.004 "get_zone_info": false, 00:22:23.004 "zone_management": false, 00:22:23.004 "zone_append": false, 00:22:23.004 "compare": false, 00:22:23.004 "compare_and_write": false, 00:22:23.004 "abort": false, 00:22:23.004 "seek_hole": true, 00:22:23.004 "seek_data": true, 00:22:23.004 "copy": false, 00:22:23.004 "nvme_iov_md": false 00:22:23.004 }, 00:22:23.004 "driver_specific": { 00:22:23.004 "lvol": { 00:22:23.004 "lvol_store_uuid": "a0a42ede-2a61-4871-80cc-ed0868c0cf29", 00:22:23.004 "base_bdev": "nvme0n1", 00:22:23.004 "thin_provision": true, 00:22:23.004 "num_allocated_clusters": 0, 00:22:23.004 "snapshot": false, 00:22:23.004 "clone": false, 00:22:23.005 "esnap_clone": false 00:22:23.005 } 00:22:23.005 } 00:22:23.005 } 00:22:23.005 ]' 00:22:23.005 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:23.005 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:23.005 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:23.005 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:23.005 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:23.005 08:42:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:23.005 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:23.005 08:42:57 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:23.298 08:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:23.298 08:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:23.298 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:23.298 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:23.298 08:42:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:22:23.298 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:23.298 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdfcebcc-c44b-4117-bade-be32e73b7d87 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:23.558 { 00:22:23.558 "name": "cdfcebcc-c44b-4117-bade-be32e73b7d87", 00:22:23.558 "aliases": [ 00:22:23.558 "lvs/nvme0n1p0" 00:22:23.558 ], 00:22:23.558 "product_name": "Logical Volume", 00:22:23.558 "block_size": 4096, 00:22:23.558 "num_blocks": 26476544, 00:22:23.558 "uuid": "cdfcebcc-c44b-4117-bade-be32e73b7d87", 00:22:23.558 "assigned_rate_limits": { 00:22:23.558 "rw_ios_per_sec": 0, 00:22:23.558 "rw_mbytes_per_sec": 0, 00:22:23.558 "r_mbytes_per_sec": 0, 00:22:23.558 "w_mbytes_per_sec": 0 00:22:23.558 }, 00:22:23.558 "claimed": false, 00:22:23.558 "zoned": false, 00:22:23.558 "supported_io_types": { 00:22:23.558 "read": true, 00:22:23.558 "write": true, 00:22:23.558 "unmap": true, 00:22:23.558 "flush": false, 00:22:23.558 "reset": true, 00:22:23.558 "nvme_admin": false, 00:22:23.558 "nvme_io": false, 00:22:23.558 "nvme_io_md": false, 00:22:23.558 "write_zeroes": true, 00:22:23.558 "zcopy": false, 00:22:23.558 "get_zone_info": false, 00:22:23.558 "zone_management": false, 00:22:23.558 "zone_append": false, 00:22:23.558 "compare": false, 00:22:23.558 "compare_and_write": false, 00:22:23.558 "abort": false, 00:22:23.558 "seek_hole": true, 00:22:23.558 "seek_data": true, 00:22:23.558 "copy": false, 00:22:23.558 "nvme_iov_md": false 00:22:23.558 }, 00:22:23.558 "driver_specific": { 00:22:23.558 "lvol": { 00:22:23.558 "lvol_store_uuid": "a0a42ede-2a61-4871-80cc-ed0868c0cf29", 00:22:23.558 "base_bdev": "nvme0n1", 00:22:23.558 "thin_provision": true, 00:22:23.558 "num_allocated_clusters": 0, 00:22:23.558 "snapshot": false, 00:22:23.558 "clone": false, 00:22:23.558 "esnap_clone": false 00:22:23.558 } 00:22:23.558 } 00:22:23.558 } 00:22:23.558 ]' 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:23.558 08:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cdfcebcc-c44b-4117-bade-be32e73b7d87 -c nvc0n1p0 --l2p_dram_limit 20 00:22:23.819 [2024-11-22 08:42:58.680525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.680581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:23.819 [2024-11-22 08:42:58.680597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:23.819 [2024-11-22 08:42:58.680627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.680698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.680716] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.819 [2024-11-22 08:42:58.680727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:23.819 [2024-11-22 08:42:58.680746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.680766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:23.819 [2024-11-22 08:42:58.681878] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:23.819 [2024-11-22 08:42:58.681916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.681931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.819 [2024-11-22 08:42:58.681942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:22:23.819 [2024-11-22 08:42:58.681963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.682045] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 70b17d20-156a-4817-bd87-b122fc9be175 00:22:23.819 [2024-11-22 08:42:58.683500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.683537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:23.819 [2024-11-22 08:42:58.683552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:23.819 [2024-11-22 08:42:58.683568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.691265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.691295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.819 [2024-11-22 08:42:58.691310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.668 ms 00:22:23.819 [2024-11-22 08:42:58.691321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.691420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.691434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.819 [2024-11-22 08:42:58.691451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:23.819 [2024-11-22 08:42:58.691461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.691526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.691538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:23.819 [2024-11-22 08:42:58.691551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:23.819 [2024-11-22 08:42:58.691562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.691587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:23.819 [2024-11-22 08:42:58.696511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.696550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.819 [2024-11-22 08:42:58.696562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.942 ms 00:22:23.819 [2024-11-22 08:42:58.696576] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.696611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.696624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:23.819 [2024-11-22 08:42:58.696634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:23.819 [2024-11-22 08:42:58.696647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.696679] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:23.819 [2024-11-22 08:42:58.696807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:23.819 [2024-11-22 08:42:58.696821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:23.819 [2024-11-22 08:42:58.696837] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:23.819 [2024-11-22 08:42:58.696850] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:23.819 [2024-11-22 08:42:58.696864] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:23.819 [2024-11-22 08:42:58.696875] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:23.819 [2024-11-22 08:42:58.696888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:23.819 [2024-11-22 08:42:58.696897] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:23.819 [2024-11-22 08:42:58.696910] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:23.819 [2024-11-22 08:42:58.696920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.696935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:23.819 [2024-11-22 08:42:58.696945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:22:23.819 [2024-11-22 08:42:58.696980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.697050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.819 [2024-11-22 08:42:58.697065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:23.819 [2024-11-22 08:42:58.697093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:23.819 [2024-11-22 08:42:58.697108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.819 [2024-11-22 08:42:58.697183] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:23.819 [2024-11-22 08:42:58.697198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:23.819 [2024-11-22 08:42:58.697211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:23.819 [2024-11-22 08:42:58.697224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:23.819 [2024-11-22 08:42:58.697249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:23.819 
[2024-11-22 08:42:58.697271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:23.819 [2024-11-22 08:42:58.697280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:23.819 [2024-11-22 08:42:58.697301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:23.819 [2024-11-22 08:42:58.697313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:23.819 [2024-11-22 08:42:58.697322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:23.819 [2024-11-22 08:42:58.697345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:23.819 [2024-11-22 08:42:58.697355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:23.819 [2024-11-22 08:42:58.697369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:23.819 [2024-11-22 08:42:58.697389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:23.819 [2024-11-22 08:42:58.697399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:23.819 [2024-11-22 08:42:58.697421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:23.819 [2024-11-22 08:42:58.697441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:23.819 [2024-11-22 08:42:58.697453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:23.819 [2024-11-22 08:42:58.697473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:23.819 [2024-11-22 08:42:58.697482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:23.819 [2024-11-22 08:42:58.697503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:23.819 [2024-11-22 08:42:58.697514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:23.819 [2024-11-22 08:42:58.697536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:23.819 [2024-11-22 08:42:58.697546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:23.819 [2024-11-22 08:42:58.697557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:23.819 [2024-11-22 08:42:58.697566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:23.820 [2024-11-22 08:42:58.697578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:23.820 [2024-11-22 08:42:58.697592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:23.820 [2024-11-22 08:42:58.697604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:23.820 [2024-11-22 08:42:58.697614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:23.820 [2024-11-22 08:42:58.697625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.820 [2024-11-22 08:42:58.697634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:23.820 [2024-11-22 08:42:58.697646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:23.820 [2024-11-22 08:42:58.697655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.820 [2024-11-22 08:42:58.697666] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:23.820 [2024-11-22 08:42:58.697676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:23.820 [2024-11-22 08:42:58.697688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:23.820 [2024-11-22 08:42:58.697697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.820 [2024-11-22 08:42:58.697714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:23.820 [2024-11-22 08:42:58.697723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:23.820 [2024-11-22 08:42:58.697735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:23.820 [2024-11-22 08:42:58.697744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:23.820 [2024-11-22 08:42:58.697756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:23.820 [2024-11-22 08:42:58.697765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:23.820 [2024-11-22 08:42:58.697781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:23.820 [2024-11-22 08:42:58.697794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:23.820 [2024-11-22 08:42:58.697808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:23.820 [2024-11-22 08:42:58.697819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:23.820 [2024-11-22 08:42:58.697833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:23.820 [2024-11-22 08:42:58.697844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:23.820 [2024-11-22 08:42:58.697857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:23.820 [2024-11-22 08:42:58.697867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:23.820 [2024-11-22 08:42:58.697881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:23.820 [2024-11-22 08:42:58.697891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:23.820 [2024-11-22 08:42:58.697906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:23.820 [2024-11-22 08:42:58.697916] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:23.820 [2024-11-22 08:42:58.697928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:23.820 [2024-11-22 08:42:58.697938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:23.820 [2024-11-22 08:42:58.697950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:23.820 [2024-11-22 08:42:58.697972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:23.820 [2024-11-22 08:42:58.697985] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:23.820 [2024-11-22 08:42:58.697997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:23.820 [2024-11-22 08:42:58.698011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:23.820 [2024-11-22 08:42:58.698022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:23.820 [2024-11-22 08:42:58.698035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:23.820 [2024-11-22 08:42:58.698045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:23.820 [2024-11-22 08:42:58.698059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.820 [2024-11-22 08:42:58.698072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:23.820 [2024-11-22 08:42:58.698085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:22:23.820 [2024-11-22 08:42:58.698095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.820 [2024-11-22 08:42:58.698135] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
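For reference, the superblock layout entries above report each region in raw FTL blocks, while the dump_region lines report MiB. Assuming the default 4 KiB FTL block size, the two views line up exactly; a minimal conversion sketch (the awk helper is illustrative, not part of the harness), using the type:0x3 entry as the example:

  awk -v offs=$((0x5020)) -v sz=$((0x80)) \
      'BEGIN { printf "offset: %.2f MiB, blocks: %.2f MiB\n", offs*4096/2^20, sz*4096/2^20 }'
  # -> offset: 80.12 MiB, blocks: 0.50 MiB, i.e. the band_md region dumped above.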
00:22:23.820 [2024-11-22 08:42:58.698149] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:28.014 [2024-11-22 08:43:02.265896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.266186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:28.014 [2024-11-22 08:43:02.266224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3573.552 ms 00:22:28.014 [2024-11-22 08:43:02.266236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.303850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.304083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.014 [2024-11-22 08:43:02.304115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.417 ms 00:22:28.014 [2024-11-22 08:43:02.304127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.304284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.304299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:28.014 [2024-11-22 08:43:02.304316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:28.014 [2024-11-22 08:43:02.304327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.358236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.358278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.014 [2024-11-22 08:43:02.358295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.955 ms 00:22:28.014 [2024-11-22 08:43:02.358305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.358344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.358355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.014 [2024-11-22 08:43:02.358367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:28.014 [2024-11-22 08:43:02.358380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.358879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.358893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.014 [2024-11-22 08:43:02.358906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:22:28.014 [2024-11-22 08:43:02.358916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.359037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.359051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.014 [2024-11-22 08:43:02.359067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:28.014 [2024-11-22 08:43:02.359077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.378979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.379015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.014 [2024-11-22 
08:43:02.379031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.910 ms 00:22:28.014 [2024-11-22 08:43:02.379060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.390833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:28.014 [2024-11-22 08:43:02.396752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.396787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:28.014 [2024-11-22 08:43:02.396799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.630 ms 00:22:28.014 [2024-11-22 08:43:02.396828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.490044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.490100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:28.014 [2024-11-22 08:43:02.490115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.336 ms 00:22:28.014 [2024-11-22 08:43:02.490129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.490301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.490320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:28.014 [2024-11-22 08:43:02.490330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:22:28.014 [2024-11-22 08:43:02.490346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.525150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.525204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:28.014 [2024-11-22 08:43:02.525218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.797 ms 00:22:28.014 [2024-11-22 08:43:02.525231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.558750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.558791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:28.014 [2024-11-22 08:43:02.558805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.534 ms 00:22:28.014 [2024-11-22 08:43:02.558834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.559563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.559599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:28.014 [2024-11-22 08:43:02.559611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:22:28.014 [2024-11-22 08:43:02.559624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.657738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.657931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:28.014 [2024-11-22 08:43:02.657984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.221 ms 00:22:28.014 [2024-11-22 08:43:02.657999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 
08:43:02.692828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.692871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:28.014 [2024-11-22 08:43:02.692887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.807 ms 00:22:28.014 [2024-11-22 08:43:02.692908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.727418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.727458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:28.014 [2024-11-22 08:43:02.727471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.528 ms 00:22:28.014 [2024-11-22 08:43:02.727482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.761722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.761767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:28.014 [2024-11-22 08:43:02.761780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.258 ms 00:22:28.014 [2024-11-22 08:43:02.761792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.761832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.761849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:28.014 [2024-11-22 08:43:02.761859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:28.014 [2024-11-22 08:43:02.761871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.761984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.014 [2024-11-22 08:43:02.762017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:28.014 [2024-11-22 08:43:02.762027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:28.014 [2024-11-22 08:43:02.762040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.014 [2024-11-22 08:43:02.763027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4088.666 ms, result 0 00:22:28.014 { 00:22:28.014 "name": "ftl0", 00:22:28.014 "uuid": "70b17d20-156a-4817-bd87-b122fc9be175" 00:22:28.014 } 00:22:28.014 08:43:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:22:28.014 08:43:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:22:28.014 08:43:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:22:28.014 08:43:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:22:28.274 [2024-11-22 08:43:03.118835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:28.274 I/O size of 69632 is greater than zero copy threshold (65536). 00:22:28.274 Zero copy mechanism will not be used. 00:22:28.274 Running I/O for 4 seconds... 
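The zero-copy notice above is pure arithmetic: assuming the FTL bdev's 4 KiB blocks, the 69632-byte I/O size is 17 blocks (68 KiB), one block over bdevperf's 65536-byte zero-copy threshold, so this run falls back to bounce buffers. A quick check (illustrative only):

  echo $(( 17 * 4096 ))      # 69632, the -o value passed to perform_tests
  echo $(( 69632 - 65536 ))  # 4096, one block over the zero-copy threshold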
00:22:30.153 1365.00 IOPS, 90.64 MiB/s [2024-11-22T08:43:06.179Z] 1405.50 IOPS, 93.33 MiB/s [2024-11-22T08:43:07.559Z] 1435.67 IOPS, 95.34 MiB/s [2024-11-22T08:43:07.559Z] 1448.50 IOPS, 96.19 MiB/s 00:22:32.472 Latency(us) 00:22:32.472 [2024-11-22T08:43:07.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.472 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:22:32.472 ftl0 : 4.00 1448.21 96.17 0.00 0.00 730.56 236.88 2131.89 00:22:32.472 [2024-11-22T08:43:07.559Z] =================================================================================================================== 00:22:32.472 [2024-11-22T08:43:07.559Z] Total : 1448.21 96.17 0.00 0.00 730.56 236.88 2131.89 00:22:32.472 { 00:22:32.472 "results": [ 00:22:32.472 { 00:22:32.472 "job": "ftl0", 00:22:32.472 "core_mask": "0x1", 00:22:32.472 "workload": "randwrite", 00:22:32.472 "status": "finished", 00:22:32.472 "queue_depth": 1, 00:22:32.472 "io_size": 69632, 00:22:32.472 "runtime": 4.001494, 00:22:32.472 "iops": 1448.2090939034272, 00:22:32.472 "mibps": 96.17013514202446, 00:22:32.472 "io_failed": 0, 00:22:32.472 "io_timeout": 0, 00:22:32.472 "avg_latency_us": 730.5624072822784, 00:22:32.472 "min_latency_us": 236.87710843373495, 00:22:32.472 "max_latency_us": 2131.8939759036143 00:22:32.472 } 00:22:32.472 ], 00:22:32.472 "core_count": 1 00:22:32.472 } 00:22:32.472 [2024-11-22 08:43:07.123036] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:32.472 08:43:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:22:32.472 [2024-11-22 08:43:07.239087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:32.472 Running I/O for 4 seconds... 
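Note that the "mibps" field in the results JSON above is derived rather than independently measured: MiB/s = IOPS x io_size / 2^20. A sanity check against the q=1 run (the awk line is illustrative, not part of bdevperf):

  awk 'BEGIN { printf "%.2f MiB/s\n", 1448.2090939034272 * 69632 / 2^20 }'
  # -> 96.17 MiB/s, matching the mibps reported for the q=1 randwrite run above.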
00:22:34.347 12123.00 IOPS, 47.36 MiB/s [2024-11-22T08:43:10.371Z] 11963.50 IOPS, 46.73 MiB/s [2024-11-22T08:43:11.310Z] 11725.67 IOPS, 45.80 MiB/s [2024-11-22T08:43:11.310Z] 11701.75 IOPS, 45.71 MiB/s 00:22:36.223 Latency(us) 00:22:36.223 [2024-11-22T08:43:11.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.223 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:22:36.223 ftl0 : 4.02 11687.57 45.65 0.00 0.00 10929.69 200.69 31162.50 00:22:36.223 [2024-11-22T08:43:11.310Z] =================================================================================================================== 00:22:36.223 [2024-11-22T08:43:11.310Z] Total : 11687.57 45.65 0.00 0.00 10929.69 0.00 31162.50 00:22:36.223 [2024-11-22 08:43:11.257655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:36.223 { 00:22:36.223 "results": [ 00:22:36.223 { 00:22:36.223 "job": "ftl0", 00:22:36.223 "core_mask": "0x1", 00:22:36.223 "workload": "randwrite", 00:22:36.223 "status": "finished", 00:22:36.223 "queue_depth": 128, 00:22:36.223 "io_size": 4096, 00:22:36.223 "runtime": 4.015547, 00:22:36.223 "iops": 11687.573324381461, 00:22:36.223 "mibps": 45.65458329836508, 00:22:36.223 "io_failed": 0, 00:22:36.223 "io_timeout": 0, 00:22:36.223 "avg_latency_us": 10929.685231884669, 00:22:36.223 "min_latency_us": 200.6875502008032, 00:22:36.223 "max_latency_us": 31162.499598393573 00:22:36.223 } 00:22:36.223 ], 00:22:36.223 "core_count": 1 00:22:36.223 } 00:22:36.223 08:43:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:22:36.482 [2024-11-22 08:43:11.382151] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:36.482 Running I/O for 4 seconds... 
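The q=128 randwrite numbers above are also self-consistent under Little's law, IOPS ~ queue_depth / avg_latency; a rough check (illustrative only, not part of the harness):

  awk 'BEGIN { printf "%.0f IOPS\n", 128 / (10929.69 / 1e6) }'
  # -> ~11711 IOPS, close to the 11687.57 IOPS measured at queue depth 128.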
00:22:38.354 9290.00 IOPS, 36.29 MiB/s [2024-11-22T08:43:14.818Z] 9388.50 IOPS, 36.67 MiB/s [2024-11-22T08:43:15.755Z] 9426.00 IOPS, 36.82 MiB/s [2024-11-22T08:43:15.755Z] 9486.25 IOPS, 37.06 MiB/s 00:22:40.668 Latency(us) 00:22:40.668 [2024-11-22T08:43:15.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.668 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:40.668 Verification LBA range: start 0x0 length 0x1400000 00:22:40.668 ftl0 : 4.01 9495.78 37.09 0.00 0.00 13438.68 243.46 17160.43 00:22:40.668 [2024-11-22T08:43:15.755Z] =================================================================================================================== 00:22:40.668 [2024-11-22T08:43:15.755Z] Total : 9495.78 37.09 0.00 0.00 13438.68 0.00 17160.43 00:22:40.668 [2024-11-22 08:43:15.403544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:40.669 { 00:22:40.669 "results": [ 00:22:40.669 { 00:22:40.669 "job": "ftl0", 00:22:40.669 "core_mask": "0x1", 00:22:40.669 "workload": "verify", 00:22:40.669 "status": "finished", 00:22:40.669 "verify_range": { 00:22:40.669 "start": 0, 00:22:40.669 "length": 20971520 00:22:40.669 }, 00:22:40.669 "queue_depth": 128, 00:22:40.669 "io_size": 4096, 00:22:40.669 "runtime": 4.009467, 00:22:40.669 "iops": 9495.775872453869, 00:22:40.669 "mibps": 37.092874501772926, 00:22:40.669 "io_failed": 0, 00:22:40.669 "io_timeout": 0, 00:22:40.669 "avg_latency_us": 13438.675097817266, 00:22:40.669 "min_latency_us": 243.4570281124498, 00:22:40.669 "max_latency_us": 17160.430522088354 00:22:40.669 } 00:22:40.669 ], 00:22:40.669 "core_count": 1 00:22:40.669 } 00:22:40.669 08:43:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:22:40.669 [2024-11-22 08:43:15.594223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.669 [2024-11-22 08:43:15.594281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:40.669 [2024-11-22 08:43:15.594296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:40.669 [2024-11-22 08:43:15.594324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.669 [2024-11-22 08:43:15.594353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:40.669 [2024-11-22 08:43:15.598434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.669 [2024-11-22 08:43:15.598463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:40.669 [2024-11-22 08:43:15.598478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.067 ms 00:22:40.669 [2024-11-22 08:43:15.598492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.669 [2024-11-22 08:43:15.600586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.669 [2024-11-22 08:43:15.600742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:40.669 [2024-11-22 08:43:15.600775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:22:40.669 [2024-11-22 08:43:15.600789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:15.797173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 08:43:15.797221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:22:40.929 [2024-11-22 08:43:15.797243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 196.669 ms 00:22:40.929 [2024-11-22 08:43:15.797255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:15.802238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 08:43:15.802273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:40.929 [2024-11-22 08:43:15.802288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.948 ms 00:22:40.929 [2024-11-22 08:43:15.802314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:15.839033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 08:43:15.839230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:40.929 [2024-11-22 08:43:15.839258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.701 ms 00:22:40.929 [2024-11-22 08:43:15.839269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:15.861603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 08:43:15.861647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:40.929 [2024-11-22 08:43:15.861664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.266 ms 00:22:40.929 [2024-11-22 08:43:15.861675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:15.861833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 08:43:15.861850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:40.929 [2024-11-22 08:43:15.861867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:22:40.929 [2024-11-22 08:43:15.861877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:15.898402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 08:43:15.898441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:40.929 [2024-11-22 08:43:15.898458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.564 ms 00:22:40.929 [2024-11-22 08:43:15.898468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:15.933673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 08:43:15.933829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:40.929 [2024-11-22 08:43:15.933871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.219 ms 00:22:40.929 [2024-11-22 08:43:15.933881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:15.968850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 08:43:15.968887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:40.929 [2024-11-22 08:43:15.968903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.916 ms 00:22:40.929 [2024-11-22 08:43:15.968928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:16.002965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-22 
08:43:16.003001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:40.929 [2024-11-22 08:43:16.003020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.979 ms 00:22:40.929 [2024-11-22 08:43:16.003046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-22 08:43:16.003086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:40.929 [2024-11-22 08:43:16.003117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:40.929 [2024-11-22 08:43:16.003744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.003998] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004295] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:40.930 [2024-11-22 08:43:16.004350] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:40.930 [2024-11-22 08:43:16.004374] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 70b17d20-156a-4817-bd87-b122fc9be175 00:22:40.930 [2024-11-22 08:43:16.004385] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:40.930 [2024-11-22 08:43:16.004400] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:40.930 [2024-11-22 08:43:16.004409] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:40.930 [2024-11-22 08:43:16.004421] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:40.930 [2024-11-22 08:43:16.004431] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:40.930 [2024-11-22 08:43:16.004446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:40.930 [2024-11-22 08:43:16.004455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:40.930 [2024-11-22 08:43:16.004469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:40.930 [2024-11-22 08:43:16.004478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:40.930 [2024-11-22 08:43:16.004490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.930 [2024-11-22 08:43:16.004500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:40.930 [2024-11-22 08:43:16.004513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:22:40.930 [2024-11-22 08:43:16.004522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.189 [2024-11-22 08:43:16.023587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.189 [2024-11-22 08:43:16.023622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:41.189 [2024-11-22 08:43:16.023637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.040 ms 00:22:41.189 [2024-11-22 08:43:16.023662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.189 [2024-11-22 08:43:16.024255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.189 [2024-11-22 08:43:16.024275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:41.189 [2024-11-22 08:43:16.024289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:22:41.189 [2024-11-22 08:43:16.024301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.189 [2024-11-22 08:43:16.076261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.189 [2024-11-22 08:43:16.076299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:41.189 [2024-11-22 08:43:16.076316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.189 [2024-11-22 08:43:16.076342] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:41.189 [2024-11-22 08:43:16.076399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.189 [2024-11-22 08:43:16.076410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:41.189 [2024-11-22 08:43:16.076423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.189 [2024-11-22 08:43:16.076433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.189 [2024-11-22 08:43:16.076512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.189 [2024-11-22 08:43:16.076525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:41.189 [2024-11-22 08:43:16.076538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.189 [2024-11-22 08:43:16.076548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.189 [2024-11-22 08:43:16.076567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.189 [2024-11-22 08:43:16.076577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:41.189 [2024-11-22 08:43:16.076588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.189 [2024-11-22 08:43:16.076598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.189 [2024-11-22 08:43:16.193745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.189 [2024-11-22 08:43:16.193807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:41.189 [2024-11-22 08:43:16.193842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.189 [2024-11-22 08:43:16.193853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.448 [2024-11-22 08:43:16.288285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.448 [2024-11-22 08:43:16.288342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:41.448 [2024-11-22 08:43:16.288358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.448 [2024-11-22 08:43:16.288385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.448 [2024-11-22 08:43:16.288526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.448 [2024-11-22 08:43:16.288544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:41.448 [2024-11-22 08:43:16.288558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.448 [2024-11-22 08:43:16.288568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.448 [2024-11-22 08:43:16.288620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.448 [2024-11-22 08:43:16.288635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:41.448 [2024-11-22 08:43:16.288648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.448 [2024-11-22 08:43:16.288658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.448 [2024-11-22 08:43:16.288772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.448 [2024-11-22 08:43:16.288785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:41.448 [2024-11-22 08:43:16.288803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:41.448 [2024-11-22 08:43:16.288813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.448 [2024-11-22 08:43:16.288850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.448 [2024-11-22 08:43:16.288863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:41.448 [2024-11-22 08:43:16.288876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.448 [2024-11-22 08:43:16.288886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.448 [2024-11-22 08:43:16.288924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.448 [2024-11-22 08:43:16.288935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:41.448 [2024-11-22 08:43:16.288950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.448 [2024-11-22 08:43:16.288986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.448 [2024-11-22 08:43:16.289038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.448 [2024-11-22 08:43:16.289059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:41.448 [2024-11-22 08:43:16.289072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.448 [2024-11-22 08:43:16.289082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.448 [2024-11-22 08:43:16.289257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 696.094 ms, result 0 00:22:41.448 true 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77639 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77639 ']' 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77639 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77639 00:22:41.448 killing process with pid 77639 00:22:41.448 Received shutdown signal, test time was about 4.000000 seconds 00:22:41.448 00:22:41.448 Latency(us) 00:22:41.448 [2024-11-22T08:43:16.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.448 [2024-11-22T08:43:16.535Z] =================================================================================================================== 00:22:41.448 [2024-11-22T08:43:16.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77639' 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77639 00:22:41.448 08:43:16 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77639 00:22:45.665 Remove shared memory files 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:45.665 08:43:19 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:22:45.665 ************************************ 00:22:45.665 END TEST ftl_bdevperf 00:22:45.665 ************************************ 00:22:45.665 00:22:45.665 real 0m25.371s 00:22:45.665 user 0m27.873s 00:22:45.665 sys 0m1.265s 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.665 08:43:19 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:45.665 08:43:19 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:45.665 08:43:19 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:45.665 08:43:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.665 08:43:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:45.665 ************************************ 00:22:45.665 START TEST ftl_trim 00:22:45.665 ************************************ 00:22:45.665 08:43:19 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:45.665 * Looking for test storage... 00:22:45.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.665 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.665 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.665 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:45.665 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.665 08:43:20 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.666 08:43:20 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:45.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.666 --rc genhtml_branch_coverage=1 00:22:45.666 --rc genhtml_function_coverage=1 00:22:45.666 --rc genhtml_legend=1 00:22:45.666 --rc geninfo_all_blocks=1 00:22:45.666 --rc geninfo_unexecuted_blocks=1 00:22:45.666 00:22:45.666 ' 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:45.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.666 --rc genhtml_branch_coverage=1 00:22:45.666 --rc genhtml_function_coverage=1 00:22:45.666 --rc genhtml_legend=1 00:22:45.666 --rc geninfo_all_blocks=1 00:22:45.666 --rc geninfo_unexecuted_blocks=1 00:22:45.666 00:22:45.666 ' 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:45.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.666 --rc genhtml_branch_coverage=1 00:22:45.666 --rc genhtml_function_coverage=1 00:22:45.666 --rc genhtml_legend=1 00:22:45.666 --rc geninfo_all_blocks=1 00:22:45.666 --rc geninfo_unexecuted_blocks=1 00:22:45.666 00:22:45.666 ' 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:45.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.666 --rc genhtml_branch_coverage=1 00:22:45.666 --rc genhtml_function_coverage=1 00:22:45.666 --rc genhtml_legend=1 00:22:45.666 --rc geninfo_all_blocks=1 00:22:45.666 --rc geninfo_unexecuted_blocks=1 00:22:45.666 00:22:45.666 ' 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
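The common.sh@8/common.sh@9 trace above is the usual path-anchoring idiom, reconstructed here for readability (equivalent shell, assuming $0 is the trim.sh path):

  testdir=$(readlink -f "$(dirname "$0")")  # /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")   # /home/vagrant/spdk_repo/spdk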
00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:45.666 08:43:20 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78005 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:45.666 08:43:20 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78005 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78005 ']' 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.666 08:43:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:45.666 [2024-11-22 08:43:20.346901] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:22:45.666 [2024-11-22 08:43:20.347126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78005 ] 00:22:45.666 [2024-11-22 08:43:20.543812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.666 [2024-11-22 08:43:20.661485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.666 [2024-11-22 08:43:20.661626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.666 [2024-11-22 08:43:20.661662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.636 08:43:21 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.636 08:43:21 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:46.636 08:43:21 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:46.636 08:43:21 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:22:46.636 08:43:21 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:46.636 08:43:21 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:22:46.636 08:43:21 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:22:46.636 08:43:21 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:46.895 08:43:21 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:46.895 08:43:21 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:22:46.895 08:43:21 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:46.895 08:43:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:46.895 08:43:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:46.895 08:43:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:46.895 08:43:21 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:46.895 08:43:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:47.154 08:43:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:47.154 { 00:22:47.154 "name": "nvme0n1", 00:22:47.154 "aliases": [ 
00:22:47.154 "c4b01638-06d5-4e0a-8a57-8ffa8969874a" 00:22:47.154 ], 00:22:47.154 "product_name": "NVMe disk", 00:22:47.154 "block_size": 4096, 00:22:47.154 "num_blocks": 1310720, 00:22:47.154 "uuid": "c4b01638-06d5-4e0a-8a57-8ffa8969874a", 00:22:47.154 "numa_id": -1, 00:22:47.154 "assigned_rate_limits": { 00:22:47.154 "rw_ios_per_sec": 0, 00:22:47.154 "rw_mbytes_per_sec": 0, 00:22:47.154 "r_mbytes_per_sec": 0, 00:22:47.154 "w_mbytes_per_sec": 0 00:22:47.154 }, 00:22:47.154 "claimed": true, 00:22:47.154 "claim_type": "read_many_write_one", 00:22:47.154 "zoned": false, 00:22:47.154 "supported_io_types": { 00:22:47.154 "read": true, 00:22:47.154 "write": true, 00:22:47.154 "unmap": true, 00:22:47.154 "flush": true, 00:22:47.154 "reset": true, 00:22:47.154 "nvme_admin": true, 00:22:47.154 "nvme_io": true, 00:22:47.154 "nvme_io_md": false, 00:22:47.154 "write_zeroes": true, 00:22:47.154 "zcopy": false, 00:22:47.154 "get_zone_info": false, 00:22:47.154 "zone_management": false, 00:22:47.154 "zone_append": false, 00:22:47.154 "compare": true, 00:22:47.154 "compare_and_write": false, 00:22:47.154 "abort": true, 00:22:47.154 "seek_hole": false, 00:22:47.154 "seek_data": false, 00:22:47.154 "copy": true, 00:22:47.154 "nvme_iov_md": false 00:22:47.154 }, 00:22:47.154 "driver_specific": { 00:22:47.154 "nvme": [ 00:22:47.154 { 00:22:47.154 "pci_address": "0000:00:11.0", 00:22:47.154 "trid": { 00:22:47.154 "trtype": "PCIe", 00:22:47.154 "traddr": "0000:00:11.0" 00:22:47.154 }, 00:22:47.154 "ctrlr_data": { 00:22:47.154 "cntlid": 0, 00:22:47.154 "vendor_id": "0x1b36", 00:22:47.154 "model_number": "QEMU NVMe Ctrl", 00:22:47.154 "serial_number": "12341", 00:22:47.154 "firmware_revision": "8.0.0", 00:22:47.154 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:47.154 "oacs": { 00:22:47.154 "security": 0, 00:22:47.154 "format": 1, 00:22:47.154 "firmware": 0, 00:22:47.154 "ns_manage": 1 00:22:47.154 }, 00:22:47.154 "multi_ctrlr": false, 00:22:47.154 "ana_reporting": false 00:22:47.154 }, 00:22:47.154 "vs": { 00:22:47.154 "nvme_version": "1.4" 00:22:47.154 }, 00:22:47.154 "ns_data": { 00:22:47.154 "id": 1, 00:22:47.154 "can_share": false 00:22:47.154 } 00:22:47.154 } 00:22:47.154 ], 00:22:47.154 "mp_policy": "active_passive" 00:22:47.154 } 00:22:47.154 } 00:22:47.154 ]' 00:22:47.154 08:43:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:47.154 08:43:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:47.154 08:43:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:47.154 08:43:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:47.154 08:43:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:47.154 08:43:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:22:47.154 08:43:22 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:47.154 08:43:22 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:47.154 08:43:22 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:47.154 08:43:22 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:47.154 08:43:22 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:47.413 08:43:22 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=a0a42ede-2a61-4871-80cc-ed0868c0cf29 00:22:47.413 08:43:22 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:47.413 08:43:22 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u a0a42ede-2a61-4871-80cc-ed0868c0cf29 00:22:47.673 08:43:22 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:47.932 08:43:22 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=70ac0eee-d93c-44d5-bfac-d79903346d07 00:22:47.932 08:43:22 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 70ac0eee-d93c-44d5-bfac-d79903346d07 00:22:48.191 08:43:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.191 08:43:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.191 08:43:23 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:48.191 08:43:23 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:48.191 08:43:23 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.191 08:43:23 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:48.191 08:43:23 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.191 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.191 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:48.191 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:48.191 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:48.191 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.451 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:48.451 { 00:22:48.451 "name": "ca5e40c0-353b-4633-86de-186744bb609d", 00:22:48.451 "aliases": [ 00:22:48.451 "lvs/nvme0n1p0" 00:22:48.451 ], 00:22:48.451 "product_name": "Logical Volume", 00:22:48.451 "block_size": 4096, 00:22:48.451 "num_blocks": 26476544, 00:22:48.451 "uuid": "ca5e40c0-353b-4633-86de-186744bb609d", 00:22:48.451 "assigned_rate_limits": { 00:22:48.451 "rw_ios_per_sec": 0, 00:22:48.451 "rw_mbytes_per_sec": 0, 00:22:48.451 "r_mbytes_per_sec": 0, 00:22:48.451 "w_mbytes_per_sec": 0 00:22:48.451 }, 00:22:48.451 "claimed": false, 00:22:48.451 "zoned": false, 00:22:48.451 "supported_io_types": { 00:22:48.451 "read": true, 00:22:48.451 "write": true, 00:22:48.451 "unmap": true, 00:22:48.451 "flush": false, 00:22:48.451 "reset": true, 00:22:48.451 "nvme_admin": false, 00:22:48.451 "nvme_io": false, 00:22:48.451 "nvme_io_md": false, 00:22:48.451 "write_zeroes": true, 00:22:48.451 "zcopy": false, 00:22:48.451 "get_zone_info": false, 00:22:48.451 "zone_management": false, 00:22:48.451 "zone_append": false, 00:22:48.451 "compare": false, 00:22:48.451 "compare_and_write": false, 00:22:48.451 "abort": false, 00:22:48.451 "seek_hole": true, 00:22:48.451 "seek_data": true, 00:22:48.451 "copy": false, 00:22:48.451 "nvme_iov_md": false 00:22:48.451 }, 00:22:48.451 "driver_specific": { 00:22:48.451 "lvol": { 00:22:48.451 "lvol_store_uuid": "70ac0eee-d93c-44d5-bfac-d79903346d07", 00:22:48.451 "base_bdev": "nvme0n1", 00:22:48.451 "thin_provision": true, 00:22:48.451 "num_allocated_clusters": 0, 00:22:48.451 "snapshot": false, 00:22:48.451 "clone": false, 00:22:48.451 "esnap_clone": false 00:22:48.451 } 00:22:48.451 } 00:22:48.451 } 00:22:48.451 ]' 00:22:48.451 08:43:23 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:48.451 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:48.451 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:48.451 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:48.451 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:48.451 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:48.451 08:43:23 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:48.451 08:43:23 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:48.451 08:43:23 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:48.711 08:43:23 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:48.711 08:43:23 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:48.711 08:43:23 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.711 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.711 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:48.711 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:48.711 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:48.711 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca5e40c0-353b-4633-86de-186744bb609d 00:22:48.971 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:48.971 { 00:22:48.971 "name": "ca5e40c0-353b-4633-86de-186744bb609d", 00:22:48.971 "aliases": [ 00:22:48.971 "lvs/nvme0n1p0" 00:22:48.971 ], 00:22:48.971 "product_name": "Logical Volume", 00:22:48.971 "block_size": 4096, 00:22:48.971 "num_blocks": 26476544, 00:22:48.971 "uuid": "ca5e40c0-353b-4633-86de-186744bb609d", 00:22:48.971 "assigned_rate_limits": { 00:22:48.971 "rw_ios_per_sec": 0, 00:22:48.971 "rw_mbytes_per_sec": 0, 00:22:48.971 "r_mbytes_per_sec": 0, 00:22:48.971 "w_mbytes_per_sec": 0 00:22:48.971 }, 00:22:48.971 "claimed": false, 00:22:48.971 "zoned": false, 00:22:48.971 "supported_io_types": { 00:22:48.971 "read": true, 00:22:48.971 "write": true, 00:22:48.971 "unmap": true, 00:22:48.971 "flush": false, 00:22:48.971 "reset": true, 00:22:48.971 "nvme_admin": false, 00:22:48.971 "nvme_io": false, 00:22:48.971 "nvme_io_md": false, 00:22:48.971 "write_zeroes": true, 00:22:48.971 "zcopy": false, 00:22:48.971 "get_zone_info": false, 00:22:48.971 "zone_management": false, 00:22:48.971 "zone_append": false, 00:22:48.971 "compare": false, 00:22:48.971 "compare_and_write": false, 00:22:48.971 "abort": false, 00:22:48.971 "seek_hole": true, 00:22:48.971 "seek_data": true, 00:22:48.971 "copy": false, 00:22:48.971 "nvme_iov_md": false 00:22:48.971 }, 00:22:48.971 "driver_specific": { 00:22:48.971 "lvol": { 00:22:48.971 "lvol_store_uuid": "70ac0eee-d93c-44d5-bfac-d79903346d07", 00:22:48.971 "base_bdev": "nvme0n1", 00:22:48.971 "thin_provision": true, 00:22:48.971 "num_allocated_clusters": 0, 00:22:48.971 "snapshot": false, 00:22:48.971 "clone": false, 00:22:48.971 "esnap_clone": false 00:22:48.971 } 00:22:48.971 } 00:22:48.971 } 00:22:48.971 ]' 00:22:48.971 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:48.971 08:43:23 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:22:48.971 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:48.971 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:48.971 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:48.971 08:43:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:48.972 08:43:23 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:48.972 08:43:23 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:49.231 08:43:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:49.231 08:43:24 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:49.231 08:43:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ca5e40c0-353b-4633-86de-186744bb609d 00:22:49.231 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ca5e40c0-353b-4633-86de-186744bb609d 00:22:49.231 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:49.231 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:49.231 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:49.231 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca5e40c0-353b-4633-86de-186744bb609d 00:22:49.491 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:49.491 { 00:22:49.491 "name": "ca5e40c0-353b-4633-86de-186744bb609d", 00:22:49.491 "aliases": [ 00:22:49.491 "lvs/nvme0n1p0" 00:22:49.491 ], 00:22:49.491 "product_name": "Logical Volume", 00:22:49.491 "block_size": 4096, 00:22:49.491 "num_blocks": 26476544, 00:22:49.491 "uuid": "ca5e40c0-353b-4633-86de-186744bb609d", 00:22:49.491 "assigned_rate_limits": { 00:22:49.491 "rw_ios_per_sec": 0, 00:22:49.491 "rw_mbytes_per_sec": 0, 00:22:49.491 "r_mbytes_per_sec": 0, 00:22:49.491 "w_mbytes_per_sec": 0 00:22:49.491 }, 00:22:49.491 "claimed": false, 00:22:49.491 "zoned": false, 00:22:49.491 "supported_io_types": { 00:22:49.491 "read": true, 00:22:49.491 "write": true, 00:22:49.491 "unmap": true, 00:22:49.491 "flush": false, 00:22:49.491 "reset": true, 00:22:49.491 "nvme_admin": false, 00:22:49.491 "nvme_io": false, 00:22:49.491 "nvme_io_md": false, 00:22:49.491 "write_zeroes": true, 00:22:49.491 "zcopy": false, 00:22:49.491 "get_zone_info": false, 00:22:49.491 "zone_management": false, 00:22:49.491 "zone_append": false, 00:22:49.491 "compare": false, 00:22:49.491 "compare_and_write": false, 00:22:49.491 "abort": false, 00:22:49.491 "seek_hole": true, 00:22:49.491 "seek_data": true, 00:22:49.491 "copy": false, 00:22:49.491 "nvme_iov_md": false 00:22:49.491 }, 00:22:49.491 "driver_specific": { 00:22:49.491 "lvol": { 00:22:49.491 "lvol_store_uuid": "70ac0eee-d93c-44d5-bfac-d79903346d07", 00:22:49.491 "base_bdev": "nvme0n1", 00:22:49.491 "thin_provision": true, 00:22:49.491 "num_allocated_clusters": 0, 00:22:49.491 "snapshot": false, 00:22:49.491 "clone": false, 00:22:49.491 "esnap_clone": false 00:22:49.491 } 00:22:49.491 } 00:22:49.491 } 00:22:49.491 ]' 00:22:49.491 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:49.491 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:49.491 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:49.491 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:22:49.491 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:49.491 08:43:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:49.491 08:43:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:22:49.491 08:43:24 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ca5e40c0-353b-4633-86de-186744bb609d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:22:49.751 [2024-11-22 08:43:24.607500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.607552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:49.752 [2024-11-22 08:43:24.607589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:49.752 [2024-11-22 08:43:24.607600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.610922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.610975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:49.752 [2024-11-22 08:43:24.610991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.296 ms 00:22:49.752 [2024-11-22 08:43:24.611001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.611143] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:49.752 [2024-11-22 08:43:24.612150] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:49.752 [2024-11-22 08:43:24.612189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.612200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:49.752 [2024-11-22 08:43:24.612214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:22:49.752 [2024-11-22 08:43:24.612224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.612332] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86 00:22:49.752 [2024-11-22 08:43:24.613775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.613811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:49.752 [2024-11-22 08:43:24.613824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:49.752 [2024-11-22 08:43:24.613837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.621342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.621378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:49.752 [2024-11-22 08:43:24.621410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.438 ms 00:22:49.752 [2024-11-22 08:43:24.621424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.621572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.621590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:49.752 [2024-11-22 08:43:24.621601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.080 ms 00:22:49.752 [2024-11-22 08:43:24.621618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.621658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.621672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:49.752 [2024-11-22 08:43:24.621683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:49.752 [2024-11-22 08:43:24.621705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.621743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:49.752 [2024-11-22 08:43:24.626907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.626945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:49.752 [2024-11-22 08:43:24.626977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms 00:22:49.752 [2024-11-22 08:43:24.626987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.627066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.627078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:49.752 [2024-11-22 08:43:24.627092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:49.752 [2024-11-22 08:43:24.627118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.627153] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:49.752 [2024-11-22 08:43:24.627277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:49.752 [2024-11-22 08:43:24.627297] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:49.752 [2024-11-22 08:43:24.627310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:49.752 [2024-11-22 08:43:24.627326] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:49.752 [2024-11-22 08:43:24.627337] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:49.752 [2024-11-22 08:43:24.627351] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:49.752 [2024-11-22 08:43:24.627361] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:49.752 [2024-11-22 08:43:24.627373] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:49.752 [2024-11-22 08:43:24.627385] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:49.752 [2024-11-22 08:43:24.627399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 [2024-11-22 08:43:24.627409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:49.752 [2024-11-22 08:43:24.627423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:22:49.752 [2024-11-22 08:43:24.627433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.627544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.752 
[2024-11-22 08:43:24.627559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:49.752 [2024-11-22 08:43:24.627572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:49.752 [2024-11-22 08:43:24.627582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.752 [2024-11-22 08:43:24.627699] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:49.752 [2024-11-22 08:43:24.627711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:49.752 [2024-11-22 08:43:24.627724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:49.752 [2024-11-22 08:43:24.627734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.752 [2024-11-22 08:43:24.627746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:49.752 [2024-11-22 08:43:24.627755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:49.752 [2024-11-22 08:43:24.627767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:49.752 [2024-11-22 08:43:24.627776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:49.752 [2024-11-22 08:43:24.627788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:49.752 [2024-11-22 08:43:24.627797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:49.752 [2024-11-22 08:43:24.627809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:49.752 [2024-11-22 08:43:24.627818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:49.752 [2024-11-22 08:43:24.627830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:49.752 [2024-11-22 08:43:24.627839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:49.752 [2024-11-22 08:43:24.627852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:49.752 [2024-11-22 08:43:24.627862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.752 [2024-11-22 08:43:24.627875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:49.752 [2024-11-22 08:43:24.627884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:49.752 [2024-11-22 08:43:24.627898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.752 [2024-11-22 08:43:24.627907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:49.752 [2024-11-22 08:43:24.627919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:49.752 [2024-11-22 08:43:24.627928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.752 [2024-11-22 08:43:24.627939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:49.752 [2024-11-22 08:43:24.627949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:49.752 [2024-11-22 08:43:24.627976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.752 [2024-11-22 08:43:24.627985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:49.752 [2024-11-22 08:43:24.628000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:49.752 [2024-11-22 08:43:24.628009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.752 [2024-11-22 08:43:24.628021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:22:49.752 [2024-11-22 08:43:24.628031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:49.752 [2024-11-22 08:43:24.628042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.752 [2024-11-22 08:43:24.628051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:49.752 [2024-11-22 08:43:24.628065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:49.752 [2024-11-22 08:43:24.628074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:49.752 [2024-11-22 08:43:24.628086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:49.752 [2024-11-22 08:43:24.628096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:49.752 [2024-11-22 08:43:24.628107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:49.752 [2024-11-22 08:43:24.628116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:49.752 [2024-11-22 08:43:24.628128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:49.752 [2024-11-22 08:43:24.628137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.752 [2024-11-22 08:43:24.628148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:49.752 [2024-11-22 08:43:24.628157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:49.752 [2024-11-22 08:43:24.628169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.752 [2024-11-22 08:43:24.628178] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:49.752 [2024-11-22 08:43:24.628190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:49.752 [2024-11-22 08:43:24.628200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:49.752 [2024-11-22 08:43:24.628215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.752 [2024-11-22 08:43:24.628225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:49.753 [2024-11-22 08:43:24.628240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:49.753 [2024-11-22 08:43:24.628250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:49.753 [2024-11-22 08:43:24.628262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:49.753 [2024-11-22 08:43:24.628271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:49.753 [2024-11-22 08:43:24.628282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:49.753 [2024-11-22 08:43:24.628301] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:49.753 [2024-11-22 08:43:24.628316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:49.753 [2024-11-22 08:43:24.628328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:49.753 [2024-11-22 08:43:24.628341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:49.753 [2024-11-22 08:43:24.628352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:22:49.753 [2024-11-22 08:43:24.628364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:49.753 [2024-11-22 08:43:24.628374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:49.753 [2024-11-22 08:43:24.628387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:49.753 [2024-11-22 08:43:24.628397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:49.753 [2024-11-22 08:43:24.628409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:49.753 [2024-11-22 08:43:24.628419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:49.753 [2024-11-22 08:43:24.628434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:49.753 [2024-11-22 08:43:24.628444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:49.753 [2024-11-22 08:43:24.628457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:49.753 [2024-11-22 08:43:24.628467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:49.753 [2024-11-22 08:43:24.628479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:49.753 [2024-11-22 08:43:24.628489] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:49.753 [2024-11-22 08:43:24.628512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:49.753 [2024-11-22 08:43:24.628525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:49.753 [2024-11-22 08:43:24.628538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:49.753 [2024-11-22 08:43:24.628548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:49.753 [2024-11-22 08:43:24.628561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:49.753 [2024-11-22 08:43:24.628572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.753 [2024-11-22 08:43:24.628585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:49.753 [2024-11-22 08:43:24.628595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:22:49.753 [2024-11-22 08:43:24.628608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.753 [2024-11-22 08:43:24.628693] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:22:49.753 [2024-11-22 08:43:24.628710] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:53.949 [2024-11-22 08:43:28.197179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.197256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:53.949 [2024-11-22 08:43:28.197273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3574.277 ms 00:22:53.949 [2024-11-22 08:43:28.197286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.235441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.235495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.949 [2024-11-22 08:43:28.235511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.788 ms 00:22:53.949 [2024-11-22 08:43:28.235524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.235707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.235731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:53.949 [2024-11-22 08:43:28.235743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:53.949 [2024-11-22 08:43:28.235759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.294445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.294522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.949 [2024-11-22 08:43:28.294541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.720 ms 00:22:53.949 [2024-11-22 08:43:28.294558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.294673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.294693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.949 [2024-11-22 08:43:28.294707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:53.949 [2024-11-22 08:43:28.294723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.295213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.295247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.949 [2024-11-22 08:43:28.295261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:22:53.949 [2024-11-22 08:43:28.295277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.295412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.295429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.949 [2024-11-22 08:43:28.295442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:53.949 [2024-11-22 08:43:28.295460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.317454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.317501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:22:53.949 [2024-11-22 08:43:28.317515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.973 ms 00:22:53.949 [2024-11-22 08:43:28.317544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.330290] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:53.949 [2024-11-22 08:43:28.346848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.346902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:53.949 [2024-11-22 08:43:28.346919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.226 ms 00:22:53.949 [2024-11-22 08:43:28.346929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.452115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.452182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:53.949 [2024-11-22 08:43:28.452201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.199 ms 00:22:53.949 [2024-11-22 08:43:28.452212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.452451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.452465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:53.949 [2024-11-22 08:43:28.452482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:22:53.949 [2024-11-22 08:43:28.452492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.488571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.488612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:53.949 [2024-11-22 08:43:28.488629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.096 ms 00:22:53.949 [2024-11-22 08:43:28.488639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.523559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.523598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:53.949 [2024-11-22 08:43:28.523615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.884 ms 00:22:53.949 [2024-11-22 08:43:28.523625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.524422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.524453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:53.949 [2024-11-22 08:43:28.524467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:22:53.949 [2024-11-22 08:43:28.524477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.632703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.632747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:53.949 [2024-11-22 08:43:28.632771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.358 ms 00:22:53.949 [2024-11-22 08:43:28.632797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
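
For orientation this deep in the startup trace: the ftl0 instance being initialized here was assembled earlier in the run by a short chain of rpc.py calls, condensed below as a sketch (the lvstore/lvol names are captured into variables because their UUIDs are run-specific; get_bdev_size in autotest_common.sh derives the sizes by reducing bdev_get_bdevs JSON with jq, e.g. 1310720 blocks x 4096 B = 5120 MiB for the base namespace). The L2P figures above are consistent with the arguments: 23592960 entries x 4 B = 90 MiB backs the 90.00 MiB l2p region in the layout dump, while --l2p_dram_limit 60 caps the resident slice, hence the "l2p maximum resident size is: 59 (of 60) MiB" notice.

    # Condensed replay of the creation RPCs traced earlier in this run
    # (a sketch for orientation, not the test script itself; UUID-bearing
    # outputs are captured rather than hard-coded).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # 5120 MiB base namespace
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # NV cache namespace
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                   # prints the new lvstore UUID
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")        # thin-provisioned 103424 MiB lvol
    $rpc bdev_split_create nvc0n1 -s 5171 1                            # carves the nvc0n1p0 cache slice
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
    # Torn down at the end of the test with: $rpc bdev_ftl_unload -b ftl0

The -t 240 timeout mirrors trim.sh's timeout=240: first-time FTL creation scrubs the NV cache chunks, which the trace above accounts for as the 3574 ms "Scrub NV cache" step.
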
00:22:53.949 [2024-11-22 08:43:28.669585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.949 [2024-11-22 08:43:28.669628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:53.949 [2024-11-22 08:43:28.669646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.729 ms 00:22:53.949 [2024-11-22 08:43:28.669656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.949 [2024-11-22 08:43:28.705486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.950 [2024-11-22 08:43:28.705527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:53.950 [2024-11-22 08:43:28.705559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.794 ms 00:22:53.950 [2024-11-22 08:43:28.705569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.950 [2024-11-22 08:43:28.741077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.950 [2024-11-22 08:43:28.741119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:53.950 [2024-11-22 08:43:28.741152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.478 ms 00:22:53.950 [2024-11-22 08:43:28.741178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.950 [2024-11-22 08:43:28.741270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.950 [2024-11-22 08:43:28.741287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:53.950 [2024-11-22 08:43:28.741303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:53.950 [2024-11-22 08:43:28.741313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.950 [2024-11-22 08:43:28.741399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.950 [2024-11-22 08:43:28.741411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:53.950 [2024-11-22 08:43:28.741424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:53.950 [2024-11-22 08:43:28.741438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.950 [2024-11-22 08:43:28.742495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.950 [2024-11-22 08:43:28.746713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4141.438 ms, result 0 00:22:53.950 [2024-11-22 08:43:28.747617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:53.950 { 00:22:53.950 "name": "ftl0", 00:22:53.950 "uuid": "735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86" 00:22:53.950 } 00:22:53.950 08:43:28 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:53.950 08:43:28 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:53.950 08:43:28 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:53.950 08:43:28 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:22:53.950 08:43:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:53.950 08:43:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:53.950 08:43:28 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:53.950 08:43:28 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:54.209 [ 00:22:54.209 { 00:22:54.209 "name": "ftl0", 00:22:54.209 "aliases": [ 00:22:54.209 "735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86" 00:22:54.209 ], 00:22:54.209 "product_name": "FTL disk", 00:22:54.209 "block_size": 4096, 00:22:54.209 "num_blocks": 23592960, 00:22:54.209 "uuid": "735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86", 00:22:54.209 "assigned_rate_limits": { 00:22:54.209 "rw_ios_per_sec": 0, 00:22:54.209 "rw_mbytes_per_sec": 0, 00:22:54.209 "r_mbytes_per_sec": 0, 00:22:54.209 "w_mbytes_per_sec": 0 00:22:54.209 }, 00:22:54.209 "claimed": false, 00:22:54.209 "zoned": false, 00:22:54.209 "supported_io_types": { 00:22:54.209 "read": true, 00:22:54.209 "write": true, 00:22:54.209 "unmap": true, 00:22:54.209 "flush": true, 00:22:54.209 "reset": false, 00:22:54.209 "nvme_admin": false, 00:22:54.209 "nvme_io": false, 00:22:54.209 "nvme_io_md": false, 00:22:54.209 "write_zeroes": true, 00:22:54.209 "zcopy": false, 00:22:54.209 "get_zone_info": false, 00:22:54.209 "zone_management": false, 00:22:54.209 "zone_append": false, 00:22:54.209 "compare": false, 00:22:54.209 "compare_and_write": false, 00:22:54.209 "abort": false, 00:22:54.209 "seek_hole": false, 00:22:54.209 "seek_data": false, 00:22:54.209 "copy": false, 00:22:54.209 "nvme_iov_md": false 00:22:54.209 }, 00:22:54.209 "driver_specific": { 00:22:54.209 "ftl": { 00:22:54.209 "base_bdev": "ca5e40c0-353b-4633-86de-186744bb609d", 00:22:54.209 "cache": "nvc0n1p0" 00:22:54.209 } 00:22:54.209 } 00:22:54.209 } 00:22:54.209 ] 00:22:54.209 08:43:29 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:22:54.209 08:43:29 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:54.209 08:43:29 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:54.468 08:43:29 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:54.468 08:43:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:54.730 08:43:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:54.730 { 00:22:54.730 "name": "ftl0", 00:22:54.730 "aliases": [ 00:22:54.730 "735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86" 00:22:54.730 ], 00:22:54.730 "product_name": "FTL disk", 00:22:54.730 "block_size": 4096, 00:22:54.730 "num_blocks": 23592960, 00:22:54.730 "uuid": "735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86", 00:22:54.730 "assigned_rate_limits": { 00:22:54.730 "rw_ios_per_sec": 0, 00:22:54.730 "rw_mbytes_per_sec": 0, 00:22:54.730 "r_mbytes_per_sec": 0, 00:22:54.730 "w_mbytes_per_sec": 0 00:22:54.730 }, 00:22:54.730 "claimed": false, 00:22:54.730 "zoned": false, 00:22:54.730 "supported_io_types": { 00:22:54.730 "read": true, 00:22:54.730 "write": true, 00:22:54.730 "unmap": true, 00:22:54.730 "flush": true, 00:22:54.730 "reset": false, 00:22:54.730 "nvme_admin": false, 00:22:54.730 "nvme_io": false, 00:22:54.730 "nvme_io_md": false, 00:22:54.730 "write_zeroes": true, 00:22:54.730 "zcopy": false, 00:22:54.730 "get_zone_info": false, 00:22:54.730 "zone_management": false, 00:22:54.730 "zone_append": false, 00:22:54.730 "compare": false, 00:22:54.730 "compare_and_write": false, 00:22:54.730 "abort": false, 00:22:54.730 "seek_hole": false, 00:22:54.730 "seek_data": false, 00:22:54.730 "copy": false, 00:22:54.730 "nvme_iov_md": false 00:22:54.730 }, 00:22:54.730 "driver_specific": { 00:22:54.730 "ftl": { 00:22:54.730 "base_bdev": "ca5e40c0-353b-4633-86de-186744bb609d", 
00:22:54.730 "cache": "nvc0n1p0" 00:22:54.730 } 00:22:54.730 } 00:22:54.730 } 00:22:54.730 ]' 00:22:54.730 08:43:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:54.730 08:43:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:54.730 08:43:29 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:54.730 [2024-11-22 08:43:29.750572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.730 [2024-11-22 08:43:29.750634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:54.730 [2024-11-22 08:43:29.750653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:54.730 [2024-11-22 08:43:29.750670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.730 [2024-11-22 08:43:29.750712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:54.730 [2024-11-22 08:43:29.754987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.730 [2024-11-22 08:43:29.755024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:54.730 [2024-11-22 08:43:29.755046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:22:54.730 [2024-11-22 08:43:29.755056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.730 [2024-11-22 08:43:29.755591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.730 [2024-11-22 08:43:29.755615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:54.730 [2024-11-22 08:43:29.755629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:22:54.730 [2024-11-22 08:43:29.755639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.730 [2024-11-22 08:43:29.758467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.730 [2024-11-22 08:43:29.758494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:54.730 [2024-11-22 08:43:29.758509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:22:54.730 [2024-11-22 08:43:29.758520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.730 [2024-11-22 08:43:29.764192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.730 [2024-11-22 08:43:29.764227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:54.730 [2024-11-22 08:43:29.764258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.643 ms 00:22:54.730 [2024-11-22 08:43:29.764268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.730 [2024-11-22 08:43:29.800892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.730 [2024-11-22 08:43:29.800933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:54.730 [2024-11-22 08:43:29.800953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.567 ms 00:22:54.730 [2024-11-22 08:43:29.800986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.990 [2024-11-22 08:43:29.822944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.990 [2024-11-22 08:43:29.822990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:54.990 [2024-11-22 08:43:29.823007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.906 ms 00:22:54.990 [2024-11-22 08:43:29.823021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.990 [2024-11-22 08:43:29.823258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.990 [2024-11-22 08:43:29.823272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:54.990 [2024-11-22 08:43:29.823286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:22:54.990 [2024-11-22 08:43:29.823296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.990 [2024-11-22 08:43:29.859252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.990 [2024-11-22 08:43:29.859302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:54.990 [2024-11-22 08:43:29.859319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.978 ms 00:22:54.990 [2024-11-22 08:43:29.859329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.990 [2024-11-22 08:43:29.894971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.990 [2024-11-22 08:43:29.895009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:54.990 [2024-11-22 08:43:29.895044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.594 ms 00:22:54.990 [2024-11-22 08:43:29.895053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.990 [2024-11-22 08:43:29.930106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.990 [2024-11-22 08:43:29.930147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:54.990 [2024-11-22 08:43:29.930179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.021 ms 00:22:54.990 [2024-11-22 08:43:29.930188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.990 [2024-11-22 08:43:29.965222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.990 [2024-11-22 08:43:29.965271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:54.990 [2024-11-22 08:43:29.965288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.956 ms 00:22:54.990 [2024-11-22 08:43:29.965297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.990 [2024-11-22 08:43:29.965402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:54.990 [2024-11-22 08:43:29.965419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965508] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:54.990 [2024-11-22 08:43:29.965589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 
[2024-11-22 08:43:29.965824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.965986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:22:54.991 [2024-11-22 08:43:29.966145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:54.991 [2024-11-22 08:43:29.966694] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:54.992 [2024-11-22 08:43:29.966709] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86 00:22:54.992 [2024-11-22 08:43:29.966720] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:54.992 [2024-11-22 08:43:29.966732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:54.992 [2024-11-22 08:43:29.966741] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:54.992 [2024-11-22 08:43:29.966754] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:54.992 [2024-11-22 08:43:29.966767] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:54.992 [2024-11-22 08:43:29.966780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:22:54.992 [2024-11-22 08:43:29.966791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:54.992 [2024-11-22 08:43:29.966802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:54.992 [2024-11-22 08:43:29.966811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:54.992 [2024-11-22 08:43:29.966824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.992 [2024-11-22 08:43:29.966834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:54.992 [2024-11-22 08:43:29.966847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:22:54.992 [2024-11-22 08:43:29.966857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.992 [2024-11-22 08:43:29.986521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.992 [2024-11-22 08:43:29.986557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:54.992 [2024-11-22 08:43:29.986579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.654 ms 00:22:54.992 [2024-11-22 08:43:29.986589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.992 [2024-11-22 08:43:29.987192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.992 [2024-11-22 08:43:29.987217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:54.992 [2024-11-22 08:43:29.987231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:22:54.992 [2024-11-22 08:43:29.987241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.992 [2024-11-22 08:43:30.056216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.992 [2024-11-22 08:43:30.056260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.992 [2024-11-22 08:43:30.056291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.992 [2024-11-22 08:43:30.056302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.992 [2024-11-22 08:43:30.056408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.992 [2024-11-22 08:43:30.056421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.992 [2024-11-22 08:43:30.056434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.992 [2024-11-22 08:43:30.056444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.992 [2024-11-22 08:43:30.056514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.992 [2024-11-22 08:43:30.056527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.992 [2024-11-22 08:43:30.056545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.992 [2024-11-22 08:43:30.056555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.992 [2024-11-22 08:43:30.056588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.992 [2024-11-22 08:43:30.056599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.992 [2024-11-22 08:43:30.056612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.992 [2024-11-22 08:43:30.056622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.252 [2024-11-22 08:43:30.186973] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.252 [2024-11-22 08:43:30.187025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:55.252 [2024-11-22 08:43:30.187042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.252 [2024-11-22 08:43:30.187054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.252 [2024-11-22 08:43:30.286917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.252 [2024-11-22 08:43:30.287001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:55.252 [2024-11-22 08:43:30.287019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.252 [2024-11-22 08:43:30.287030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.252 [2024-11-22 08:43:30.287150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.252 [2024-11-22 08:43:30.287162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.252 [2024-11-22 08:43:30.287196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.252 [2024-11-22 08:43:30.287209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.252 [2024-11-22 08:43:30.287266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.252 [2024-11-22 08:43:30.287276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.252 [2024-11-22 08:43:30.287289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.252 [2024-11-22 08:43:30.287299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.252 [2024-11-22 08:43:30.287445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.252 [2024-11-22 08:43:30.287458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.252 [2024-11-22 08:43:30.287472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.252 [2024-11-22 08:43:30.287482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.252 [2024-11-22 08:43:30.287545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.252 [2024-11-22 08:43:30.287558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:55.252 [2024-11-22 08:43:30.287571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.252 [2024-11-22 08:43:30.287582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.252 [2024-11-22 08:43:30.287641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.252 [2024-11-22 08:43:30.287653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.252 [2024-11-22 08:43:30.287669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.252 [2024-11-22 08:43:30.287679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.252 [2024-11-22 08:43:30.287739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.252 [2024-11-22 08:43:30.287751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.252 [2024-11-22 08:43:30.287763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.252 [2024-11-22 08:43:30.287773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:55.252 [2024-11-22 08:43:30.287977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 538.243 ms, result 0 00:22:55.252 true 00:22:55.252 08:43:30 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78005 00:22:55.252 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78005 ']' 00:22:55.252 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78005 00:22:55.252 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:55.252 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.252 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78005 00:22:55.512 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.512 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.512 killing process with pid 78005 00:22:55.512 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78005' 00:22:55.512 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78005 00:22:55.512 08:43:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78005 00:23:00.779 08:43:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:01.347 65536+0 records in 00:23:01.347 65536+0 records out 00:23:01.347 268435456 bytes (268 MB, 256 MiB) copied, 0.960815 s, 279 MB/s 00:23:01.347 08:43:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:01.347 [2024-11-22 08:43:36.251454] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:23:01.348 [2024-11-22 08:43:36.251578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78210 ] 00:23:01.607 [2024-11-22 08:43:36.431073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.607 [2024-11-22 08:43:36.538876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.866 [2024-11-22 08:43:36.874786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:01.866 [2024-11-22 08:43:36.874875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.127 [2024-11-22 08:43:37.036496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.036549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.127 [2024-11-22 08:43:37.036563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:02.127 [2024-11-22 08:43:37.036573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.039880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.039922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.127 [2024-11-22 08:43:37.039951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:23:02.127 [2024-11-22 08:43:37.039961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.040085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.127 [2024-11-22 08:43:37.041068] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.127 [2024-11-22 08:43:37.041102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.041113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.127 [2024-11-22 08:43:37.041124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:23:02.127 [2024-11-22 08:43:37.041134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.042625] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:02.127 [2024-11-22 08:43:37.061952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.062000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:02.127 [2024-11-22 08:43:37.062014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.360 ms 00:23:02.127 [2024-11-22 08:43:37.062038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.062142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.062157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:02.127 [2024-11-22 08:43:37.062169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:02.127 [2024-11-22 08:43:37.062179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.068918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:02.127 [2024-11-22 08:43:37.068948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.127 [2024-11-22 08:43:37.068968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.709 ms 00:23:02.127 [2024-11-22 08:43:37.068978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.069076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.069091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.127 [2024-11-22 08:43:37.069102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:02.127 [2024-11-22 08:43:37.069112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.069140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.069156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.127 [2024-11-22 08:43:37.069166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:02.127 [2024-11-22 08:43:37.069176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.069199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:02.127 [2024-11-22 08:43:37.074075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.074109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.127 [2024-11-22 08:43:37.074122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.889 ms 00:23:02.127 [2024-11-22 08:43:37.074132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.074200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.074212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.127 [2024-11-22 08:43:37.074224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:02.127 [2024-11-22 08:43:37.074233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.074253] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:02.127 [2024-11-22 08:43:37.074278] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:02.127 [2024-11-22 08:43:37.074312] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:02.127 [2024-11-22 08:43:37.074330] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:02.127 [2024-11-22 08:43:37.074420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.127 [2024-11-22 08:43:37.074433] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.127 [2024-11-22 08:43:37.074445] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:02.127 [2024-11-22 08:43:37.074458] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.127 [2024-11-22 08:43:37.074473] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.127 [2024-11-22 08:43:37.074484] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:02.127 [2024-11-22 08:43:37.074494] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.127 [2024-11-22 08:43:37.074504] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.127 [2024-11-22 08:43:37.074514] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.127 [2024-11-22 08:43:37.074525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.074535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.127 [2024-11-22 08:43:37.074545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:23:02.127 [2024-11-22 08:43:37.074555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.074638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.127 [2024-11-22 08:43:37.074649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.127 [2024-11-22 08:43:37.074663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:02.127 [2024-11-22 08:43:37.074673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.127 [2024-11-22 08:43:37.074764] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.127 [2024-11-22 08:43:37.074780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.127 [2024-11-22 08:43:37.074800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.127 [2024-11-22 08:43:37.074811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.127 [2024-11-22 08:43:37.074822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.127 [2024-11-22 08:43:37.074831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.127 [2024-11-22 08:43:37.074841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:02.127 [2024-11-22 08:43:37.074850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.127 [2024-11-22 08:43:37.074860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:02.127 [2024-11-22 08:43:37.074869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.127 [2024-11-22 08:43:37.074879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.127 [2024-11-22 08:43:37.074888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:02.128 [2024-11-22 08:43:37.074897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.128 [2024-11-22 08:43:37.074916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.128 [2024-11-22 08:43:37.074926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:02.128 [2024-11-22 08:43:37.074936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.128 [2024-11-22 08:43:37.074945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.128 [2024-11-22 08:43:37.074967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:02.128 [2024-11-22 08:43:37.074976] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.128 [2024-11-22 08:43:37.074986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.128 [2024-11-22 08:43:37.074995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:02.128 [2024-11-22 08:43:37.075004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.128 [2024-11-22 08:43:37.075013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.128 [2024-11-22 08:43:37.075022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:02.128 [2024-11-22 08:43:37.075032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.128 [2024-11-22 08:43:37.075041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.128 [2024-11-22 08:43:37.075050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:02.128 [2024-11-22 08:43:37.075060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.128 [2024-11-22 08:43:37.075069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.128 [2024-11-22 08:43:37.075078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:02.128 [2024-11-22 08:43:37.075087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.128 [2024-11-22 08:43:37.075096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.128 [2024-11-22 08:43:37.075105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:02.128 [2024-11-22 08:43:37.075114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.128 [2024-11-22 08:43:37.075122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.128 [2024-11-22 08:43:37.075132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:02.128 [2024-11-22 08:43:37.075141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.128 [2024-11-22 08:43:37.075150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.128 [2024-11-22 08:43:37.075159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:02.128 [2024-11-22 08:43:37.075167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.128 [2024-11-22 08:43:37.075176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.128 [2024-11-22 08:43:37.075185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:02.128 [2024-11-22 08:43:37.075196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.128 [2024-11-22 08:43:37.075206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.128 [2024-11-22 08:43:37.075215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.128 [2024-11-22 08:43:37.075225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.128 [2024-11-22 08:43:37.075239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.128 [2024-11-22 08:43:37.075249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.128 [2024-11-22 08:43:37.075259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.128 [2024-11-22 08:43:37.075268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.128 
[2024-11-22 08:43:37.075277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.128 [2024-11-22 08:43:37.075286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.128 [2024-11-22 08:43:37.075295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.128 [2024-11-22 08:43:37.075305] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.128 [2024-11-22 08:43:37.075318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.128 [2024-11-22 08:43:37.075329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:02.128 [2024-11-22 08:43:37.075340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:02.128 [2024-11-22 08:43:37.075351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:02.128 [2024-11-22 08:43:37.075361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:02.128 [2024-11-22 08:43:37.075371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:02.128 [2024-11-22 08:43:37.075382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:02.128 [2024-11-22 08:43:37.075392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:02.128 [2024-11-22 08:43:37.075402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:02.128 [2024-11-22 08:43:37.075412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:02.128 [2024-11-22 08:43:37.075422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:02.128 [2024-11-22 08:43:37.075432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:02.128 [2024-11-22 08:43:37.075442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:02.128 [2024-11-22 08:43:37.075453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:02.128 [2024-11-22 08:43:37.075463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:02.128 [2024-11-22 08:43:37.075473] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.128 [2024-11-22 08:43:37.075484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.128 [2024-11-22 08:43:37.075495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.128 [2024-11-22 08:43:37.075506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.128 [2024-11-22 08:43:37.075516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.128 [2024-11-22 08:43:37.075526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.128 [2024-11-22 08:43:37.075537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.075547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.128 [2024-11-22 08:43:37.075561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:23:02.128 [2024-11-22 08:43:37.075571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.128 [2024-11-22 08:43:37.113775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.113816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.128 [2024-11-22 08:43:37.113829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.212 ms 00:23:02.128 [2024-11-22 08:43:37.113839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.128 [2024-11-22 08:43:37.113982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.114001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.128 [2024-11-22 08:43:37.114011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:02.128 [2024-11-22 08:43:37.114021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.128 [2024-11-22 08:43:37.164976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.165014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.128 [2024-11-22 08:43:37.165027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.013 ms 00:23:02.128 [2024-11-22 08:43:37.165040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.128 [2024-11-22 08:43:37.165143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.165156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.128 [2024-11-22 08:43:37.165168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:02.128 [2024-11-22 08:43:37.165177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.128 [2024-11-22 08:43:37.165624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.165646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.128 [2024-11-22 08:43:37.165658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:23:02.128 [2024-11-22 08:43:37.165673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.128 [2024-11-22 08:43:37.165789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.165804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.128 [2024-11-22 08:43:37.165814] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:02.128 [2024-11-22 08:43:37.165824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.128 [2024-11-22 08:43:37.184972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.185010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.128 [2024-11-22 08:43:37.185038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.156 ms 00:23:02.128 [2024-11-22 08:43:37.185049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.128 [2024-11-22 08:43:37.203397] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:02.128 [2024-11-22 08:43:37.203441] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:02.128 [2024-11-22 08:43:37.203456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.128 [2024-11-22 08:43:37.203466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:02.128 [2024-11-22 08:43:37.203492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.337 ms 00:23:02.128 [2024-11-22 08:43:37.203502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.388 [2024-11-22 08:43:37.231679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.388 [2024-11-22 08:43:37.231720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:02.388 [2024-11-22 08:43:37.231744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.142 ms 00:23:02.388 [2024-11-22 08:43:37.231771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.388 [2024-11-22 08:43:37.248848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.388 [2024-11-22 08:43:37.248888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:02.388 [2024-11-22 08:43:37.248916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.025 ms 00:23:02.388 [2024-11-22 08:43:37.248926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.388 [2024-11-22 08:43:37.265940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.388 [2024-11-22 08:43:37.265983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:02.388 [2024-11-22 08:43:37.265996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.953 ms 00:23:02.388 [2024-11-22 08:43:37.266005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.388 [2024-11-22 08:43:37.266795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.388 [2024-11-22 08:43:37.266825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.388 [2024-11-22 08:43:37.266837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:23:02.388 [2024-11-22 08:43:37.266848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.388 [2024-11-22 08:43:37.348004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.388 [2024-11-22 08:43:37.348084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:02.388 [2024-11-22 08:43:37.348101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.259 ms 00:23:02.388 [2024-11-22 08:43:37.348112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.388 [2024-11-22 08:43:37.358228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:02.388 [2024-11-22 08:43:37.373410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.389 [2024-11-22 08:43:37.373454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.389 [2024-11-22 08:43:37.373467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.249 ms 00:23:02.389 [2024-11-22 08:43:37.373478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.389 [2024-11-22 08:43:37.373596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.389 [2024-11-22 08:43:37.373612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:02.389 [2024-11-22 08:43:37.373624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.389 [2024-11-22 08:43:37.373635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.389 [2024-11-22 08:43:37.373686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.389 [2024-11-22 08:43:37.373697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.389 [2024-11-22 08:43:37.373707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:02.389 [2024-11-22 08:43:37.373716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.389 [2024-11-22 08:43:37.373742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.389 [2024-11-22 08:43:37.373752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:02.389 [2024-11-22 08:43:37.373765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:02.389 [2024-11-22 08:43:37.373775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.389 [2024-11-22 08:43:37.373811] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:02.389 [2024-11-22 08:43:37.373824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.389 [2024-11-22 08:43:37.373834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:02.389 [2024-11-22 08:43:37.373844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:02.389 [2024-11-22 08:43:37.373869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.389 [2024-11-22 08:43:37.408581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.389 [2024-11-22 08:43:37.408631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.389 [2024-11-22 08:43:37.408660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.747 ms 00:23:02.389 [2024-11-22 08:43:37.408671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.389 [2024-11-22 08:43:37.408785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.389 [2024-11-22 08:43:37.408799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.389 [2024-11-22 08:43:37.408810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:02.389 [2024-11-22 08:43:37.408820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:02.389 [2024-11-22 08:43:37.409824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.389 [2024-11-22 08:43:37.413981] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.619 ms, result 0 00:23:02.389 [2024-11-22 08:43:37.414940] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:02.389 [2024-11-22 08:43:37.432476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:03.766  [2024-11-22T08:43:39.789Z] Copying: 21/256 [MB] (21 MBps) [2024-11-22T08:43:40.727Z] Copying: 44/256 [MB] (22 MBps) [2024-11-22T08:43:41.665Z] Copying: 66/256 [MB] (22 MBps) [2024-11-22T08:43:42.602Z] Copying: 89/256 [MB] (23 MBps) [2024-11-22T08:43:43.539Z] Copying: 113/256 [MB] (23 MBps) [2024-11-22T08:43:44.475Z] Copying: 137/256 [MB] (23 MBps) [2024-11-22T08:43:45.854Z] Copying: 160/256 [MB] (23 MBps) [2024-11-22T08:43:46.790Z] Copying: 184/256 [MB] (23 MBps) [2024-11-22T08:43:47.728Z] Copying: 208/256 [MB] (23 MBps) [2024-11-22T08:43:48.665Z] Copying: 231/256 [MB] (23 MBps) [2024-11-22T08:43:48.665Z] Copying: 254/256 [MB] (23 MBps) [2024-11-22T08:43:48.665Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-22 08:43:48.466016] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:13.578 [2024-11-22 08:43:48.480294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.578 [2024-11-22 08:43:48.480336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:13.578 [2024-11-22 08:43:48.480367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:13.578 [2024-11-22 08:43:48.480378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.578 [2024-11-22 08:43:48.480400] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:13.578 [2024-11-22 08:43:48.484560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.578 [2024-11-22 08:43:48.484594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:13.578 [2024-11-22 08:43:48.484605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.151 ms 00:23:13.578 [2024-11-22 08:43:48.484614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.578 [2024-11-22 08:43:48.486667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.578 [2024-11-22 08:43:48.486704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:13.578 [2024-11-22 08:43:48.486717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:23:13.578 [2024-11-22 08:43:48.486727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.578 [2024-11-22 08:43:48.493267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.578 [2024-11-22 08:43:48.493302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:13.578 [2024-11-22 08:43:48.493336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.532 ms 00:23:13.578 [2024-11-22 08:43:48.493346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.578 [2024-11-22 08:43:48.498679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.578 
[2024-11-22 08:43:48.498710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:13.578 [2024-11-22 08:43:48.498721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.291 ms 00:23:13.578 [2024-11-22 08:43:48.498732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.578 [2024-11-22 08:43:48.532788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.578 [2024-11-22 08:43:48.532828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:13.579 [2024-11-22 08:43:48.532857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.050 ms 00:23:13.579 [2024-11-22 08:43:48.532867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.579 [2024-11-22 08:43:48.553191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.579 [2024-11-22 08:43:48.553233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:13.579 [2024-11-22 08:43:48.553253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.302 ms 00:23:13.579 [2024-11-22 08:43:48.553266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.579 [2024-11-22 08:43:48.553411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.579 [2024-11-22 08:43:48.553424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:13.579 [2024-11-22 08:43:48.553435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:13.579 [2024-11-22 08:43:48.553445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.579 [2024-11-22 08:43:48.587599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.579 [2024-11-22 08:43:48.587637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:13.579 [2024-11-22 08:43:48.587649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.191 ms 00:23:13.579 [2024-11-22 08:43:48.587658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.579 [2024-11-22 08:43:48.621576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.579 [2024-11-22 08:43:48.621610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:13.579 [2024-11-22 08:43:48.621622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.904 ms 00:23:13.579 [2024-11-22 08:43:48.621631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.579 [2024-11-22 08:43:48.654903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.579 [2024-11-22 08:43:48.654937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:13.579 [2024-11-22 08:43:48.654948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.258 ms 00:23:13.579 [2024-11-22 08:43:48.654964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.839 [2024-11-22 08:43:48.687899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.839 [2024-11-22 08:43:48.687937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:13.839 [2024-11-22 08:43:48.687948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.894 ms 00:23:13.839 [2024-11-22 08:43:48.687964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.839 [2024-11-22 08:43:48.688032] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:13.839 [2024-11-22 08:43:48.688054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2 through 100 omitted: all 100 bands report the identical line "0 / 261120 wr_cnt: 0 state: free"]
00:23:13.840 [2024-11-22 08:43:48.689124] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:13.840 [2024-11-22 08:43:48.689133] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86
00:23:13.840 [2024-11-22 08:43:48.689144] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:23:13.840 [2024-11-22 08:43:48.689154] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:23:13.840 [2024-11-22 08:43:48.689164] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:23:13.840 [2024-11-22 08:43:48.689173] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:23:13.840 [2024-11-22 08:43:48.689183] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:13.840 [2024-11-22 08:43:48.689193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:13.840 [2024-11-22 08:43:48.689203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:13.840 [2024-11-22 08:43:48.689212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:13.840 [2024-11-22 08:43:48.689220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:13.840 [2024-11-22 08:43:48.689229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:13.840 [2024-11-22 08:43:48.689239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:23:13.840 [2024-11-22 08:43:48.689253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms
00:23:13.840 [2024-11-22 08:43:48.689263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.840 [2024-11-22 08:43:48.708925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:13.840 [2024-11-22 08:43:48.708986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:23:13.840 [2024-11-22 08:43:48.708998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.675 ms
00:23:13.840 [2024-11-22 08:43:48.709024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.840 [2024-11-22 08:43:48.709591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:13.840 [2024-11-22 08:43:48.709620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:23:13.840 [2024-11-22 08:43:48.709631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms
00:23:13.840 [2024-11-22 08:43:48.709641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.840 [2024-11-22 08:43:48.761273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.841 [2024-11-22 08:43:48.761309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:13.841 [2024-11-22 08:43:48.761337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.841 [2024-11-22 08:43:48.761347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
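[Editor's note - annotation, not part of the captured log.] The ftl_dev_dump_stats block above reports "WAF: inf" next to "total writes: 960" and "user writes: 0". Assuming the conventional definition of write amplification factor (total media writes divided by user writes), the "inf" is simply that division rendered for a run that had issued no user writes at this point. A minimal shell sketch of the same arithmetic:

    total_writes=960   # "total writes" from the dump above
    user_writes=0      # "user writes" from the dump above
    if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"  # no user writes yet, so the ratio is undefined/infinite
    else
      awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi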
00:23:13.841 [2024-11-22 08:43:48.761434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.841 [2024-11-22 08:43:48.761449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:13.841 [2024-11-22 08:43:48.761459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.841 [2024-11-22 08:43:48.761469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.841 [2024-11-22 08:43:48.761519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.841 [2024-11-22 08:43:48.761532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:23:13.841 [2024-11-22 08:43:48.761543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.841 [2024-11-22 08:43:48.761553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.841 [2024-11-22 08:43:48.761571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.841 [2024-11-22 08:43:48.761581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:23:13.841 [2024-11-22 08:43:48.761595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.841 [2024-11-22 08:43:48.761604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.841 [2024-11-22 08:43:48.878544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:13.841 [2024-11-22 08:43:48.878596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:23:13.841 [2024-11-22 08:43:48.878615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:13.841 [2024-11-22 08:43:48.878641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:14.100 [2024-11-22 08:43:48.972580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.100 [2024-11-22 08:43:48.972630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:14.100 [2024-11-22 08:43:48.972648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:14.100 [2024-11-22 08:43:48.972658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:14.100 [2024-11-22 08:43:48.972734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.100 [2024-11-22 08:43:48.972745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:23:14.100 [2024-11-22 08:43:48.972756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:14.100 [2024-11-22 08:43:48.972766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:14.100 [2024-11-22 08:43:48.972795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.100 [2024-11-22 08:43:48.972805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:23:14.100 [2024-11-22 08:43:48.972815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:14.100 [2024-11-22 08:43:48.972828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:14.100 [2024-11-22 08:43:48.972935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.100 [2024-11-22 08:43:48.972947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:23:14.100 [2024-11-22 08:43:48.972957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:14.100 [2024-11-22 08:43:48.972967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:14.100 [2024-11-22 08:43:48.973021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.100 [2024-11-22 08:43:48.973033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:23:14.100
[2024-11-22 08:43:48.973043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.100 [2024-11-22 08:43:48.973069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.100 [2024-11-22 08:43:48.973110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.100 [2024-11-22 08:43:48.973122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.100 [2024-11-22 08:43:48.973132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.100 [2024-11-22 08:43:48.973142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.100 [2024-11-22 08:43:48.973184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.101 [2024-11-22 08:43:48.973196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.101 [2024-11-22 08:43:48.973207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.101 [2024-11-22 08:43:48.973220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.101 [2024-11-22 08:43:48.973355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.852 ms, result 0 00:23:15.477 00:23:15.477 00:23:15.477 08:43:50 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78351 00:23:15.477 08:43:50 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:15.477 08:43:50 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78351 00:23:15.477 08:43:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78351 ']' 00:23:15.478 08:43:50 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.478 08:43:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.478 08:43:50 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.478 08:43:50 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.478 08:43:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:15.478 [2024-11-22 08:43:50.266396] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
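[Editor's note - annotation, not part of the captured log.] The xtrace lines above show ftl/trim.sh launching a dedicated spdk_tgt with the ftl_init log flag and then blocking in waitforlisten until the target answers on /var/tmp/spdk.sock. A stand-alone sketch of the same start-up pattern; the polling loop is a simplified stand-in for the waitforlisten helper in autotest_common.sh, not a copy of it:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # Poll the default RPC socket until the target is ready to serve requests.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
    done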
00:23:15.478 [2024-11-22 08:43:50.266529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78351 ] 00:23:15.478 [2024-11-22 08:43:50.443299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.478 [2024-11-22 08:43:50.557159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.412 08:43:51 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.412 08:43:51 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:16.412 08:43:51 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:16.671 [2024-11-22 08:43:51.583392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:16.671 [2024-11-22 08:43:51.583469] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:16.932 [2024-11-22 08:43:51.770784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.770835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:16.932 [2024-11-22 08:43:51.770855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:16.932 [2024-11-22 08:43:51.770882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.774833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.774875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.932 [2024-11-22 08:43:51.774906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.935 ms 00:23:16.932 [2024-11-22 08:43:51.774917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.775041] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:16.932 [2024-11-22 08:43:51.776071] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:16.932 [2024-11-22 08:43:51.776107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.776118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.932 [2024-11-22 08:43:51.776131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:23:16.932 [2024-11-22 08:43:51.776141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.777856] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:16.932 [2024-11-22 08:43:51.796933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.797004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:16.932 [2024-11-22 08:43:51.797020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.112 ms 00:23:16.932 [2024-11-22 08:43:51.797046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.797146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.797166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:16.932 [2024-11-22 08:43:51.797177] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:16.932 [2024-11-22 08:43:51.797192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.804125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.804184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.932 [2024-11-22 08:43:51.804197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.889 ms 00:23:16.932 [2024-11-22 08:43:51.804211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.804331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.804348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.932 [2024-11-22 08:43:51.804359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:16.932 [2024-11-22 08:43:51.804371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.804405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.804418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:16.932 [2024-11-22 08:43:51.804428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:16.932 [2024-11-22 08:43:51.804440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.804465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:16.932 [2024-11-22 08:43:51.809202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.809239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.932 [2024-11-22 08:43:51.809253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:23:16.932 [2024-11-22 08:43:51.809263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.809349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.932 [2024-11-22 08:43:51.809362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:16.932 [2024-11-22 08:43:51.809376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:16.932 [2024-11-22 08:43:51.809389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.932 [2024-11-22 08:43:51.809413] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:16.932 [2024-11-22 08:43:51.809434] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:16.932 [2024-11-22 08:43:51.809479] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:16.932 [2024-11-22 08:43:51.809499] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:16.932 [2024-11-22 08:43:51.809590] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:16.932 [2024-11-22 08:43:51.809604] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:16.932 [2024-11-22 08:43:51.809621] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:16.932 [2024-11-22 08:43:51.809636] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:16.932 [2024-11-22 08:43:51.809651] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:16.932 [2024-11-22 08:43:51.809662] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:16.932 [2024-11-22 08:43:51.809681] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:16.932 [2024-11-22 08:43:51.809691] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:16.933 [2024-11-22 08:43:51.809710] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:16.933 [2024-11-22 08:43:51.809721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.933 [2024-11-22 08:43:51.809736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:16.933 [2024-11-22 08:43:51.809747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:23:16.933 [2024-11-22 08:43:51.809761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.933 [2024-11-22 08:43:51.809841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.933 [2024-11-22 08:43:51.809871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:16.933 [2024-11-22 08:43:51.809882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:16.933 [2024-11-22 08:43:51.809897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.933 [2024-11-22 08:43:51.810008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:16.933 [2024-11-22 08:43:51.810034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:16.933 [2024-11-22 08:43:51.810046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:16.933 [2024-11-22 08:43:51.810086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:16.933 [2024-11-22 08:43:51.810126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:16.933 [2024-11-22 08:43:51.810151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:16.933 [2024-11-22 08:43:51.810166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:16.933 [2024-11-22 08:43:51.810175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:16.933 [2024-11-22 08:43:51.810190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:16.933 [2024-11-22 08:43:51.810200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:16.933 [2024-11-22 08:43:51.810216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.933 
[2024-11-22 08:43:51.810225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:16.933 [2024-11-22 08:43:51.810240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:16.933 [2024-11-22 08:43:51.810284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:16.933 [2024-11-22 08:43:51.810329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:16.933 [2024-11-22 08:43:51.810362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:16.933 [2024-11-22 08:43:51.810400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:16.933 [2024-11-22 08:43:51.810433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:16.933 [2024-11-22 08:43:51.810458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:16.933 [2024-11-22 08:43:51.810472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:16.933 [2024-11-22 08:43:51.810481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:16.933 [2024-11-22 08:43:51.810495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:16.933 [2024-11-22 08:43:51.810504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:16.933 [2024-11-22 08:43:51.810523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:16.933 [2024-11-22 08:43:51.810547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:16.933 [2024-11-22 08:43:51.810557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810571] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:16.933 [2024-11-22 08:43:51.810583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:16.933 [2024-11-22 08:43:51.810613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.933 [2024-11-22 08:43:51.810649] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:16.933 [2024-11-22 08:43:51.810659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:16.933 [2024-11-22 08:43:51.810673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:16.933 [2024-11-22 08:43:51.810686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:16.933 [2024-11-22 08:43:51.810717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:16.933 [2024-11-22 08:43:51.810727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:16.933 [2024-11-22 08:43:51.810745] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:16.933 [2024-11-22 08:43:51.810757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.933 [2024-11-22 08:43:51.810847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:16.933 [2024-11-22 08:43:51.810859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:16.933 [2024-11-22 08:43:51.810928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:16.933 [2024-11-22 08:43:51.810942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:16.933 [2024-11-22 08:43:51.810981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:16.933 [2024-11-22 08:43:51.810993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:16.933 [2024-11-22 08:43:51.811021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:16.933 [2024-11-22 08:43:51.811032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:16.933 [2024-11-22 08:43:51.811062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:16.933 [2024-11-22 08:43:51.811072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:16.933 [2024-11-22 08:43:51.811094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:16.933 [2024-11-22 08:43:51.811105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:16.933 [2024-11-22 08:43:51.811128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:16.933 [2024-11-22 08:43:51.811138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:16.933 [2024-11-22 08:43:51.811232] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:16.933 [2024-11-22 
08:43:51.811245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.933 [2024-11-22 08:43:51.811278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:16.933 [2024-11-22 08:43:51.811293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:16.934 [2024-11-22 08:43:51.811316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:16.934 [2024-11-22 08:43:51.811330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:16.934 [2024-11-22 08:43:51.811352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.811364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:16.934 [2024-11-22 08:43:51.811394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:23:16.934 [2024-11-22 08:43:51.811405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.853626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.853666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.934 [2024-11-22 08:43:51.853701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.197 ms 00:23:16.934 [2024-11-22 08:43:51.853712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.853841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.853855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:16.934 [2024-11-22 08:43:51.853871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:16.934 [2024-11-22 08:43:51.853881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.900582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.900621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.934 [2024-11-22 08:43:51.900645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.746 ms 00:23:16.934 [2024-11-22 08:43:51.900672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.900771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.900784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.934 [2024-11-22 08:43:51.900799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:16.934 [2024-11-22 08:43:51.900809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.901277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.901299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.934 [2024-11-22 08:43:51.901316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:23:16.934 [2024-11-22 08:43:51.901326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.901445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.901464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.934 [2024-11-22 08:43:51.901477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:16.934 [2024-11-22 08:43:51.901487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.922667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.922705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.934 [2024-11-22 08:43:51.922739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.183 ms 00:23:16.934 [2024-11-22 08:43:51.922749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.941605] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:16.934 [2024-11-22 08:43:51.941663] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:16.934 [2024-11-22 08:43:51.941682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.941693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:16.934 [2024-11-22 08:43:51.941722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.851 ms 00:23:16.934 [2024-11-22 08:43:51.941732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.969280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.969321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:16.934 [2024-11-22 08:43:51.969336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.494 ms 00:23:16.934 [2024-11-22 08:43:51.969346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:51.986554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:51.986591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:16.934 [2024-11-22 08:43:51.986616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.135 ms 00:23:16.934 [2024-11-22 08:43:51.986642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:52.003977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:52.004015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:16.934 [2024-11-22 08:43:52.004045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.274 ms 00:23:16.934 [2024-11-22 08:43:52.004055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.934 [2024-11-22 08:43:52.004850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.934 [2024-11-22 08:43:52.004882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:16.934 [2024-11-22 08:43:52.004900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:23:16.934 [2024-11-22 08:43:52.004910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.193 [2024-11-22 
08:43:52.111799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.193 [2024-11-22 08:43:52.111863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:17.193 [2024-11-22 08:43:52.111900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.026 ms 00:23:17.193 [2024-11-22 08:43:52.111912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.193 [2024-11-22 08:43:52.122209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:17.193 [2024-11-22 08:43:52.137491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.193 [2024-11-22 08:43:52.137554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:17.193 [2024-11-22 08:43:52.137568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.501 ms 00:23:17.193 [2024-11-22 08:43:52.137598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.193 [2024-11-22 08:43:52.137689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.193 [2024-11-22 08:43:52.137708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:17.193 [2024-11-22 08:43:52.137720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:17.193 [2024-11-22 08:43:52.137736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.193 [2024-11-22 08:43:52.137788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.193 [2024-11-22 08:43:52.137805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:17.193 [2024-11-22 08:43:52.137815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:17.193 [2024-11-22 08:43:52.137835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.193 [2024-11-22 08:43:52.137859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.193 [2024-11-22 08:43:52.137876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:17.193 [2024-11-22 08:43:52.137888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:17.193 [2024-11-22 08:43:52.137902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.193 [2024-11-22 08:43:52.137941] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:17.193 [2024-11-22 08:43:52.137997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.193 [2024-11-22 08:43:52.138014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:17.193 [2024-11-22 08:43:52.138029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:17.193 [2024-11-22 08:43:52.138040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.193 [2024-11-22 08:43:52.172983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.193 [2024-11-22 08:43:52.173024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:17.193 [2024-11-22 08:43:52.173043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.959 ms 00:23:17.193 [2024-11-22 08:43:52.173053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.193 [2024-11-22 08:43:52.173186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.193 [2024-11-22 08:43:52.173200] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:17.193 [2024-11-22 08:43:52.173221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:23:17.193 [2024-11-22 08:43:52.173232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:17.193 [2024-11-22 08:43:52.174301] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:17.193 [2024-11-22 08:43:52.178321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 403.802 ms, result 0
00:23:17.193 [2024-11-22 08:43:52.179603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:17.193 Some configs were skipped because the RPC state that can call them passed over.
00:23:17.193 08:43:52 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:23:17.497 [2024-11-22 08:43:52.418838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:17.497 [2024-11-22 08:43:52.418903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:23:17.497 [2024-11-22 08:43:52.418918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms
00:23:17.497 [2024-11-22 08:43:52.418935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:17.497 [2024-11-22 08:43:52.418985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.827 ms, result 0
00:23:17.497 true
00:23:17.497 08:43:52 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:23:17.777 [2024-11-22 08:43:52.610387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:17.777 [2024-11-22 08:43:52.610432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:23:17.777 [2024-11-22 08:43:52.610454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms
00:23:17.777 [2024-11-22 08:43:52.610465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:17.777 [2024-11-22 08:43:52.610511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.403 ms, result 0
00:23:17.777 true
00:23:17.777 08:43:52 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78351
00:23:17.777 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78351 ']'
00:23:17.777 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78351
00:23:17.777 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:17.777 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:17.777 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78351
00:23:17.778 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:17.778 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:17.778 killing process with pid 78351
00:23:17.778 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78351'
00:23:17.778 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78351
00:23:17.778 08:43:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78351
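[Editor's note - annotation, not part of the captured log.] The two bdev_ftl_unmap RPCs above trim the first and the last 1024 logical blocks of the FTL device: startup reported "L2P entries: 23592960", and 23592960 - 1024 = 23591936, which is exactly the --lba argument of the second call. A sketch of the same pair of calls with the end address derived instead of hard-coded (l2p_entries copied from the startup dump):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    l2p_entries=23592960   # "L2P entries" from the FTL startup log above
    num_blocks=1024
    "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$num_blocks"
    "$rpc" bdev_ftl_unmap -b ftl0 --lba "$((l2p_entries - num_blocks))" --num_blocks "$num_blocks"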
00:23:18.723 [2024-11-22 08:43:53.736453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.723 [2024-11-22 08:43:53.736532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:18.723 [2024-11-22 08:43:53.736547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:23:18.723 [2024-11-22 08:43:53.736559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.723 [2024-11-22 08:43:53.736584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:18.723 [2024-11-22 08:43:53.740730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.723 [2024-11-22 08:43:53.740766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:18.723 [2024-11-22 08:43:53.740784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.129 ms
00:23:18.723 [2024-11-22 08:43:53.740794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.723 [2024-11-22 08:43:53.741047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.723 [2024-11-22 08:43:53.741061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:18.723 [2024-11-22 08:43:53.741074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms
00:23:18.723 [2024-11-22 08:43:53.741084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.723 [2024-11-22 08:43:53.744469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.723 [2024-11-22 08:43:53.744508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:18.723 [2024-11-22 08:43:53.744525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms
00:23:18.723 [2024-11-22 08:43:53.744536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.723 [2024-11-22 08:43:53.749959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.723 [2024-11-22 08:43:53.749997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:23:18.723 [2024-11-22 08:43:53.750026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.384 ms
00:23:18.723 [2024-11-22 08:43:53.750036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.723 [2024-11-22 08:43:53.764626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.723 [2024-11-22 08:43:53.764664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:23:18.723 [2024-11-22 08:43:53.764682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.553 ms
00:23:18.723 [2024-11-22 08:43:53.764700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.723 [2024-11-22 08:43:53.775160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.723 [2024-11-22 08:43:53.775214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:23:18.723 [2024-11-22 08:43:53.775228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.404 ms
00:23:18.723 [2024-11-22 08:43:53.775239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.723 [2024-11-22 08:43:53.775379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.723 [2024-11-22 08:43:53.775392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:18.723 [2024-11-22 08:43:53.775404] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:18.723 [2024-11-22 08:43:53.775415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.723 [2024-11-22 08:43:53.790224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.723 [2024-11-22 08:43:53.790259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:18.723 [2024-11-22 08:43:53.790273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.811 ms 00:23:18.723 [2024-11-22 08:43:53.790282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.982 [2024-11-22 08:43:53.804656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.982 [2024-11-22 08:43:53.804692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:18.982 [2024-11-22 08:43:53.804714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.324 ms 00:23:18.982 [2024-11-22 08:43:53.804723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.982 [2024-11-22 08:43:53.818678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.982 [2024-11-22 08:43:53.818715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:18.982 [2024-11-22 08:43:53.818750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.904 ms 00:23:18.982 [2024-11-22 08:43:53.818760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.982 [2024-11-22 08:43:53.832668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.982 [2024-11-22 08:43:53.832704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:18.982 [2024-11-22 08:43:53.832722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.847 ms 00:23:18.982 [2024-11-22 08:43:53.832731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.982 [2024-11-22 08:43:53.832815] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:18.982 [2024-11-22 08:43:53.832831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:18.982 [2024-11-22 08:43:53.832854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:18.982 [2024-11-22 08:43:53.832866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:18.982 [2024-11-22 08:43:53.832881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.832891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.832911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.832921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.832937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.832948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.832981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 
08:43:53.832991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:18.983 [2024-11-22 08:43:53.833343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.833992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.834007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.834018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.834033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:18.983 [2024-11-22 08:43:53.834044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:18.984 [2024-11-22 08:43:53.834186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:18.984 [2024-11-22 08:43:53.834205] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86 00:23:18.984 [2024-11-22 08:43:53.834234] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:18.984 [2024-11-22 08:43:53.834249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:18.984 [2024-11-22 08:43:53.834259] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:18.984 [2024-11-22 08:43:53.834275] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:18.984 [2024-11-22 08:43:53.834285] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:18.984 [2024-11-22 08:43:53.834300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:18.984 [2024-11-22 08:43:53.834310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:18.984 [2024-11-22 08:43:53.834324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:18.984 [2024-11-22 08:43:53.834333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:18.984 [2024-11-22 08:43:53.834347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:18.984 [2024-11-22 08:43:53.834358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:18.984 [2024-11-22 08:43:53.834374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:23:18.984 [2024-11-22 08:43:53.834389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.984 [2024-11-22 08:43:53.853675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.984 [2024-11-22 08:43:53.853710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:18.984 [2024-11-22 08:43:53.853731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.285 ms 00:23:18.984 [2024-11-22 08:43:53.853741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.984 [2024-11-22 08:43:53.854422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.984 [2024-11-22 08:43:53.854446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:18.984 [2024-11-22 08:43:53.854468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:23:18.984 [2024-11-22 08:43:53.854478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.984 [2024-11-22 08:43:53.920878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.984 [2024-11-22 08:43:53.920917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.984 [2024-11-22 08:43:53.920931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.984 [2024-11-22 08:43:53.920941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.984 [2024-11-22 08:43:53.921045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.984 [2024-11-22 08:43:53.921059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.984 [2024-11-22 08:43:53.921075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.984 [2024-11-22 08:43:53.921085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.984 [2024-11-22 08:43:53.921135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.984 [2024-11-22 08:43:53.921148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.984 [2024-11-22 08:43:53.921163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.984 [2024-11-22 08:43:53.921173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.984 [2024-11-22 08:43:53.921193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.984 [2024-11-22 08:43:53.921203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.984 [2024-11-22 08:43:53.921215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.984 [2024-11-22 08:43:53.921227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.984 [2024-11-22 08:43:54.040775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.984 [2024-11-22 08:43:54.040823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.984 [2024-11-22 08:43:54.040859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.984 [2024-11-22 08:43:54.040870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.243 [2024-11-22 
08:43:54.139633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.243 [2024-11-22 08:43:54.139685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.244 [2024-11-22 08:43:54.139705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.244 [2024-11-22 08:43:54.139721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.244 [2024-11-22 08:43:54.139826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.244 [2024-11-22 08:43:54.139838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.244 [2024-11-22 08:43:54.139858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.244 [2024-11-22 08:43:54.139869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.244 [2024-11-22 08:43:54.139903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.244 [2024-11-22 08:43:54.139914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.244 [2024-11-22 08:43:54.139929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.244 [2024-11-22 08:43:54.139939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.244 [2024-11-22 08:43:54.140091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.244 [2024-11-22 08:43:54.140105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.244 [2024-11-22 08:43:54.140121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.244 [2024-11-22 08:43:54.140131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.244 [2024-11-22 08:43:54.140177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.244 [2024-11-22 08:43:54.140189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:19.244 [2024-11-22 08:43:54.140206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.244 [2024-11-22 08:43:54.140216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.244 [2024-11-22 08:43:54.140264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.244 [2024-11-22 08:43:54.140276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.244 [2024-11-22 08:43:54.140296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.244 [2024-11-22 08:43:54.140306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.244 [2024-11-22 08:43:54.140352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.244 [2024-11-22 08:43:54.140364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.244 [2024-11-22 08:43:54.140379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.244 [2024-11-22 08:43:54.140390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.244 [2024-11-22 08:43:54.140536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.711 ms, result 0 00:23:20.180 08:43:55 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:20.180 08:43:55 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.180 [2024-11-22 08:43:55.190683] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:23:20.180 [2024-11-22 08:43:55.190799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78415 ] 00:23:20.439 [2024-11-22 08:43:55.372941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.439 [2024-11-22 08:43:55.480632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.009 [2024-11-22 08:43:55.821597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.009 [2024-11-22 08:43:55.821663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.009 [2024-11-22 08:43:55.982552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.009 [2024-11-22 08:43:55.982603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:21.009 [2024-11-22 08:43:55.982644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:21.009 [2024-11-22 08:43:55.982654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:55.985692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:55.985731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.010 [2024-11-22 08:43:55.985743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:23:21.010 [2024-11-22 08:43:55.985754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:55.985864] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:21.010 [2024-11-22 08:43:55.986973] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:21.010 [2024-11-22 08:43:55.987006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:55.987017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.010 [2024-11-22 08:43:55.987029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:23:21.010 [2024-11-22 08:43:55.987039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:55.988621] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:21.010 [2024-11-22 08:43:56.006895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.006940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:21.010 [2024-11-22 08:43:56.006976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.304 ms 00:23:21.010 [2024-11-22 08:43:56.006987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.007084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.007099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:21.010 [2024-11-22 08:43:56.007110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.021 ms 00:23:21.010 [2024-11-22 08:43:56.007119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.014003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.014030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.010 [2024-11-22 08:43:56.014056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.854 ms 00:23:21.010 [2024-11-22 08:43:56.014067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.014160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.014174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.010 [2024-11-22 08:43:56.014185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:21.010 [2024-11-22 08:43:56.014194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.014221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.014234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:21.010 [2024-11-22 08:43:56.014244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:21.010 [2024-11-22 08:43:56.014254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.014276] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:21.010 [2024-11-22 08:43:56.018849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.018884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.010 [2024-11-22 08:43:56.018895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.585 ms 00:23:21.010 [2024-11-22 08:43:56.018905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.018997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.019010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:21.010 [2024-11-22 08:43:56.019021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:21.010 [2024-11-22 08:43:56.019031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.019054] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:21.010 [2024-11-22 08:43:56.019080] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:21.010 [2024-11-22 08:43:56.019113] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:21.010 [2024-11-22 08:43:56.019131] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:21.010 [2024-11-22 08:43:56.019218] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:21.010 [2024-11-22 08:43:56.019231] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:21.010 [2024-11-22 08:43:56.019244] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:21.010 [2024-11-22 08:43:56.019257] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019272] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019283] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:21.010 [2024-11-22 08:43:56.019293] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:21.010 [2024-11-22 08:43:56.019303] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:21.010 [2024-11-22 08:43:56.019313] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:21.010 [2024-11-22 08:43:56.019323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.019333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:21.010 [2024-11-22 08:43:56.019343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:23:21.010 [2024-11-22 08:43:56.019353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.019428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.010 [2024-11-22 08:43:56.019439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:21.010 [2024-11-22 08:43:56.019452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:21.010 [2024-11-22 08:43:56.019462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.010 [2024-11-22 08:43:56.019551] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:21.010 [2024-11-22 08:43:56.019569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:21.010 [2024-11-22 08:43:56.019580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:21.010 [2024-11-22 08:43:56.019609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:21.010 [2024-11-22 08:43:56.019638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.010 [2024-11-22 08:43:56.019656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:21.010 [2024-11-22 08:43:56.019666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:21.010 [2024-11-22 08:43:56.019675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.010 [2024-11-22 08:43:56.019696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:21.010 [2024-11-22 08:43:56.019706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:21.010 [2024-11-22 08:43:56.019715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019724] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:21.010 [2024-11-22 08:43:56.019733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:21.010 [2024-11-22 08:43:56.019761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:21.010 [2024-11-22 08:43:56.019789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:21.010 [2024-11-22 08:43:56.019815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:21.010 [2024-11-22 08:43:56.019842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.010 [2024-11-22 08:43:56.019860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:21.010 [2024-11-22 08:43:56.019869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.010 [2024-11-22 08:43:56.019886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:21.010 [2024-11-22 08:43:56.019895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:21.010 [2024-11-22 08:43:56.019904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.010 [2024-11-22 08:43:56.019913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:21.010 [2024-11-22 08:43:56.019922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:21.010 [2024-11-22 08:43:56.019930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:21.010 [2024-11-22 08:43:56.019948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:21.010 [2024-11-22 08:43:56.019968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.010 [2024-11-22 08:43:56.019978] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:21.010 [2024-11-22 08:43:56.019988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:21.011 [2024-11-22 08:43:56.019998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.011 [2024-11-22 08:43:56.020011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.011 [2024-11-22 08:43:56.020021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:21.011 
[2024-11-22 08:43:56.020031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:21.011 [2024-11-22 08:43:56.020041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:21.011 [2024-11-22 08:43:56.020050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:21.011 [2024-11-22 08:43:56.020058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:21.011 [2024-11-22 08:43:56.020067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:21.011 [2024-11-22 08:43:56.020078] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:21.011 [2024-11-22 08:43:56.020090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.011 [2024-11-22 08:43:56.020102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:21.011 [2024-11-22 08:43:56.020112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:21.011 [2024-11-22 08:43:56.020122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:21.011 [2024-11-22 08:43:56.020132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:21.011 [2024-11-22 08:43:56.020142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:21.011 [2024-11-22 08:43:56.020152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:21.011 [2024-11-22 08:43:56.020162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:21.011 [2024-11-22 08:43:56.020172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:21.011 [2024-11-22 08:43:56.020182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:21.011 [2024-11-22 08:43:56.020192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:21.011 [2024-11-22 08:43:56.020202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:21.011 [2024-11-22 08:43:56.020212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:21.011 [2024-11-22 08:43:56.020222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:21.011 [2024-11-22 08:43:56.020233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:21.011 [2024-11-22 08:43:56.020243] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:21.011 [2024-11-22 08:43:56.020254] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.011 [2024-11-22 08:43:56.020265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:21.011 [2024-11-22 08:43:56.020275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:21.011 [2024-11-22 08:43:56.020285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:21.011 [2024-11-22 08:43:56.020295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:21.011 [2024-11-22 08:43:56.020306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.011 [2024-11-22 08:43:56.020317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:21.011 [2024-11-22 08:43:56.020331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:23:21.011 [2024-11-22 08:43:56.020340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.011 [2024-11-22 08:43:56.058471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.011 [2024-11-22 08:43:56.058511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.011 [2024-11-22 08:43:56.058524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.139 ms 00:23:21.011 [2024-11-22 08:43:56.058534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.011 [2024-11-22 08:43:56.058675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.011 [2024-11-22 08:43:56.058694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:21.011 [2024-11-22 08:43:56.058705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:21.011 [2024-11-22 08:43:56.058716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.113745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.113784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.271 [2024-11-22 08:43:56.113796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.096 ms 00:23:21.271 [2024-11-22 08:43:56.113810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.113912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.113924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.271 [2024-11-22 08:43:56.113935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:21.271 [2024-11-22 08:43:56.113944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.114409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.114430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.271 [2024-11-22 08:43:56.114441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:23:21.271 [2024-11-22 08:43:56.114457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 
08:43:56.114574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.114588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.271 [2024-11-22 08:43:56.114598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:21.271 [2024-11-22 08:43:56.114618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.134443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.134482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.271 [2024-11-22 08:43:56.134495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.834 ms 00:23:21.271 [2024-11-22 08:43:56.134506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.153272] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:21.271 [2024-11-22 08:43:56.153313] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:21.271 [2024-11-22 08:43:56.153327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.153337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:21.271 [2024-11-22 08:43:56.153363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.726 ms 00:23:21.271 [2024-11-22 08:43:56.153373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.181745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.181796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:21.271 [2024-11-22 08:43:56.181824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.337 ms 00:23:21.271 [2024-11-22 08:43:56.181834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.199422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.199473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:21.271 [2024-11-22 08:43:56.199486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.536 ms 00:23:21.271 [2024-11-22 08:43:56.199497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.217342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.217381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:21.271 [2024-11-22 08:43:56.217393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.784 ms 00:23:21.271 [2024-11-22 08:43:56.217403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.218173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.218204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:21.271 [2024-11-22 08:43:56.218216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:23:21.271 [2024-11-22 08:43:56.218226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.300251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.300316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:21.271 [2024-11-22 08:43:56.300332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.130 ms 00:23:21.271 [2024-11-22 08:43:56.300358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.311366] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:21.271 [2024-11-22 08:43:56.327374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.327422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:21.271 [2024-11-22 08:43:56.327438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.964 ms 00:23:21.271 [2024-11-22 08:43:56.327448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.327576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.327589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:21.271 [2024-11-22 08:43:56.327600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:21.271 [2024-11-22 08:43:56.327610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.327665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.327677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:21.271 [2024-11-22 08:43:56.327688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:21.271 [2024-11-22 08:43:56.327698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.327724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.327738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:21.271 [2024-11-22 08:43:56.327748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:21.271 [2024-11-22 08:43:56.327758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.271 [2024-11-22 08:43:56.327795] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:21.271 [2024-11-22 08:43:56.327807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.271 [2024-11-22 08:43:56.327817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:21.271 [2024-11-22 08:43:56.327828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:21.271 [2024-11-22 08:43:56.327838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.531 [2024-11-22 08:43:56.364328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.531 [2024-11-22 08:43:56.364373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:21.531 [2024-11-22 08:43:56.364387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.526 ms 00:23:21.531 [2024-11-22 08:43:56.364398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.531 [2024-11-22 08:43:56.364534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.531 [2024-11-22 08:43:56.364548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:21.531 [2024-11-22 08:43:56.364559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:21.531 [2024-11-22 08:43:56.364569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.531 [2024-11-22 08:43:56.365760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:21.531 [2024-11-22 08:43:56.370125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.469 ms, result 0 00:23:21.531 [2024-11-22 08:43:56.370915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:21.531 [2024-11-22 08:43:56.389104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.467  [2024-11-22T08:43:58.491Z] Copying: 27/256 [MB] (27 MBps) [2024-11-22T08:43:59.426Z] Copying: 49/256 [MB] (22 MBps) [2024-11-22T08:44:00.804Z] Copying: 75/256 [MB] (25 MBps) [2024-11-22T08:44:01.743Z] Copying: 100/256 [MB] (25 MBps) [2024-11-22T08:44:02.680Z] Copying: 125/256 [MB] (25 MBps) [2024-11-22T08:44:03.618Z] Copying: 149/256 [MB] (24 MBps) [2024-11-22T08:44:04.556Z] Copying: 173/256 [MB] (24 MBps) [2024-11-22T08:44:05.494Z] Copying: 198/256 [MB] (24 MBps) [2024-11-22T08:44:06.433Z] Copying: 222/256 [MB] (24 MBps) [2024-11-22T08:44:07.047Z] Copying: 247/256 [MB] (24 MBps) [2024-11-22T08:44:07.047Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-22 08:44:06.740363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:31.960 [2024-11-22 08:44:06.754624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.754670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:31.960 [2024-11-22 08:44:06.754701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:31.960 [2024-11-22 08:44:06.754723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.754748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:31.960 [2024-11-22 08:44:06.758938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.758975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:31.960 [2024-11-22 08:44:06.759003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.182 ms 00:23:31.960 [2024-11-22 08:44:06.759014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.759240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.759254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:31.960 [2024-11-22 08:44:06.759265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:23:31.960 [2024-11-22 08:44:06.759274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.762117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.762155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:31.960 [2024-11-22 08:44:06.762165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:23:31.960 [2024-11-22 08:44:06.762191] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.767661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.767695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:31.960 [2024-11-22 08:44:06.767707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.460 ms 00:23:31.960 [2024-11-22 08:44:06.767716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.802780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.802819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:31.960 [2024-11-22 08:44:06.802848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.054 ms 00:23:31.960 [2024-11-22 08:44:06.802858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.823605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.823653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:31.960 [2024-11-22 08:44:06.823682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.720 ms 00:23:31.960 [2024-11-22 08:44:06.823700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.823856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.823870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:31.960 [2024-11-22 08:44:06.823881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:31.960 [2024-11-22 08:44:06.823890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.859762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.859805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:31.960 [2024-11-22 08:44:06.859819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.897 ms 00:23:31.960 [2024-11-22 08:44:06.859829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.895885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.895928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:31.960 [2024-11-22 08:44:06.895957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.056 ms 00:23:31.960 [2024-11-22 08:44:06.895975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.931876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.931918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:31.960 [2024-11-22 08:44:06.931932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.899 ms 00:23:31.960 [2024-11-22 08:44:06.931942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.967595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.960 [2024-11-22 08:44:06.967638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:31.960 [2024-11-22 08:44:06.967651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 35.544 ms 00:23:31.960 [2024-11-22 08:44:06.967661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.960 [2024-11-22 08:44:06.967718] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:31.960 [2024-11-22 08:44:06.967735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.967980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 
[2024-11-22 08:44:06.967990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.968001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:31.960 [2024-11-22 08:44:06.968012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:23:31.961 [2024-11-22 08:44:06.968255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:31.961 [2024-11-22 08:44:06.968823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:31.961 [2024-11-22 08:44:06.968833] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86 00:23:31.961 [2024-11-22 08:44:06.968844] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:31.961 [2024-11-22 08:44:06.968853] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:31.961 [2024-11-22 08:44:06.968864] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:31.961 [2024-11-22 08:44:06.968874] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:31.961 [2024-11-22 08:44:06.968884] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:31.961 [2024-11-22 08:44:06.968895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:31.961 [2024-11-22 08:44:06.968905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:31.961 [2024-11-22 08:44:06.968914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:31.961 [2024-11-22 08:44:06.968923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:31.961 [2024-11-22 08:44:06.968932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.962 [2024-11-22 08:44:06.968950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:31.962 [2024-11-22 08:44:06.968970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:23:31.962 [2024-11-22 08:44:06.968980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.962 [2024-11-22 08:44:06.988706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.962 [2024-11-22 08:44:06.988742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:31.962 [2024-11-22 08:44:06.988755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.737 ms 00:23:31.962 [2024-11-22 08:44:06.988765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.962 [2024-11-22 08:44:06.989346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.962 [2024-11-22 08:44:06.989358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:31.962 [2024-11-22 08:44:06.989370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:23:31.962 [2024-11-22 08:44:06.989380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.251 [2024-11-22 08:44:07.044843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.251 [2024-11-22 08:44:07.044887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.251 [2024-11-22 08:44:07.044901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.251 [2024-11-22 08:44:07.044910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.045066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 
08:44:07.045080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.252 [2024-11-22 08:44:07.045102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.045113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.045166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.045179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.252 [2024-11-22 08:44:07.045189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.045198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.045217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.045234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.252 [2024-11-22 08:44:07.045260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.045270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.165150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.165229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:32.252 [2024-11-22 08:44:07.165243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.165254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.262226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.262315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:32.252 [2024-11-22 08:44:07.262329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.262339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.262411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.262422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:32.252 [2024-11-22 08:44:07.262433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.262443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.262471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.262482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:32.252 [2024-11-22 08:44:07.262500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.262509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.262618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.262631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:32.252 [2024-11-22 08:44:07.262641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.262651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.262688] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.262699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:32.252 [2024-11-22 08:44:07.262710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.262743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.262783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.262794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:32.252 [2024-11-22 08:44:07.262804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.262814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.262858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.252 [2024-11-22 08:44:07.262870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:32.252 [2024-11-22 08:44:07.262887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.252 [2024-11-22 08:44:07.262896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.252 [2024-11-22 08:44:07.263066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 509.258 ms, result 0 00:23:33.191 00:23:33.191 00:23:33.191 08:44:08 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:33.191 08:44:08 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:33.759 08:44:08 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:33.759 [2024-11-22 08:44:08.777038] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:23:33.759 [2024-11-22 08:44:08.777149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78560 ] 00:23:34.018 [2024-11-22 08:44:08.954282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.018 [2024-11-22 08:44:09.064250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.587 [2024-11-22 08:44:09.412711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.587 [2024-11-22 08:44:09.412793] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.587 [2024-11-22 08:44:09.574189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.574245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:34.587 [2024-11-22 08:44:09.574276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.587 [2024-11-22 08:44:09.574287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.577253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.577291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:34.587 [2024-11-22 08:44:09.577304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:23:34.587 [2024-11-22 08:44:09.577314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.577424] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:34.587 [2024-11-22 08:44:09.578425] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:34.587 [2024-11-22 08:44:09.578462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.578474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:34.587 [2024-11-22 08:44:09.578485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:23:34.587 [2024-11-22 08:44:09.578495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.580009] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:34.587 [2024-11-22 08:44:09.598371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.598416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:34.587 [2024-11-22 08:44:09.598445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.393 ms 00:23:34.587 [2024-11-22 08:44:09.598456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.598555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.598569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:34.587 [2024-11-22 08:44:09.598581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:34.587 [2024-11-22 08:44:09.598591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.605369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:34.587 [2024-11-22 08:44:09.605398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.587 [2024-11-22 08:44:09.605425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.741 ms 00:23:34.587 [2024-11-22 08:44:09.605434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.605530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.605544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.587 [2024-11-22 08:44:09.605555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:34.587 [2024-11-22 08:44:09.605564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.605593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.605607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:34.587 [2024-11-22 08:44:09.605617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:34.587 [2024-11-22 08:44:09.605626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.605649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:34.587 [2024-11-22 08:44:09.610364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.610394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.587 [2024-11-22 08:44:09.610406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.728 ms 00:23:34.587 [2024-11-22 08:44:09.610416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.610499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.587 [2024-11-22 08:44:09.610512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:34.587 [2024-11-22 08:44:09.610523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:34.587 [2024-11-22 08:44:09.610534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.587 [2024-11-22 08:44:09.610554] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:34.587 [2024-11-22 08:44:09.610580] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:34.588 [2024-11-22 08:44:09.610623] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:34.588 [2024-11-22 08:44:09.610642] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:34.588 [2024-11-22 08:44:09.610731] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:34.588 [2024-11-22 08:44:09.610744] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:34.588 [2024-11-22 08:44:09.610757] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:34.588 [2024-11-22 08:44:09.610770] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:34.588 [2024-11-22 08:44:09.610785] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:34.588 [2024-11-22 08:44:09.610797] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:34.588 [2024-11-22 08:44:09.610806] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:34.588 [2024-11-22 08:44:09.610816] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:34.588 [2024-11-22 08:44:09.610826] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:34.588 [2024-11-22 08:44:09.610836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.588 [2024-11-22 08:44:09.610847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:34.588 [2024-11-22 08:44:09.610857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:23:34.588 [2024-11-22 08:44:09.610867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.588 [2024-11-22 08:44:09.610943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.588 [2024-11-22 08:44:09.610968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:34.588 [2024-11-22 08:44:09.610982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:34.588 [2024-11-22 08:44:09.610992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.588 [2024-11-22 08:44:09.611085] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:34.588 [2024-11-22 08:44:09.611097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:34.588 [2024-11-22 08:44:09.611108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:34.588 [2024-11-22 08:44:09.611138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:34.588 [2024-11-22 08:44:09.611167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.588 [2024-11-22 08:44:09.611186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:34.588 [2024-11-22 08:44:09.611195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:34.588 [2024-11-22 08:44:09.611204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.588 [2024-11-22 08:44:09.611225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:34.588 [2024-11-22 08:44:09.611236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:34.588 [2024-11-22 08:44:09.611245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:34.588 [2024-11-22 08:44:09.611264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611273] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:34.588 [2024-11-22 08:44:09.611291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:34.588 [2024-11-22 08:44:09.611317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:34.588 [2024-11-22 08:44:09.611344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:34.588 [2024-11-22 08:44:09.611371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:34.588 [2024-11-22 08:44:09.611398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.588 [2024-11-22 08:44:09.611415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:34.588 [2024-11-22 08:44:09.611424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:34.588 [2024-11-22 08:44:09.611433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.588 [2024-11-22 08:44:09.611442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:34.588 [2024-11-22 08:44:09.611451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:34.588 [2024-11-22 08:44:09.611460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:34.588 [2024-11-22 08:44:09.611477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:34.588 [2024-11-22 08:44:09.611488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611497] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:34.588 [2024-11-22 08:44:09.611507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:34.588 [2024-11-22 08:44:09.611516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.588 [2024-11-22 08:44:09.611540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:34.588 [2024-11-22 08:44:09.611549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:34.588 [2024-11-22 08:44:09.611559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:34.588 
[2024-11-22 08:44:09.611568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:34.588 [2024-11-22 08:44:09.611577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:34.588 [2024-11-22 08:44:09.611587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:34.588 [2024-11-22 08:44:09.611597] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:34.588 [2024-11-22 08:44:09.611609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.588 [2024-11-22 08:44:09.611620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:34.588 [2024-11-22 08:44:09.611630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:34.589 [2024-11-22 08:44:09.611640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:34.589 [2024-11-22 08:44:09.611650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:34.589 [2024-11-22 08:44:09.611660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:34.589 [2024-11-22 08:44:09.611670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:34.589 [2024-11-22 08:44:09.611680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:34.589 [2024-11-22 08:44:09.611690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:34.589 [2024-11-22 08:44:09.611700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:34.589 [2024-11-22 08:44:09.611710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:34.589 [2024-11-22 08:44:09.611720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:34.589 [2024-11-22 08:44:09.611729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:34.589 [2024-11-22 08:44:09.611739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:34.589 [2024-11-22 08:44:09.611750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:34.589 [2024-11-22 08:44:09.611760] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:34.589 [2024-11-22 08:44:09.611771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.589 [2024-11-22 08:44:09.611782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:34.589 [2024-11-22 08:44:09.611792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:34.589 [2024-11-22 08:44:09.611802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:34.589 [2024-11-22 08:44:09.611814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:34.589 [2024-11-22 08:44:09.611824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.589 [2024-11-22 08:44:09.611834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:34.589 [2024-11-22 08:44:09.611848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:23:34.589 [2024-11-22 08:44:09.611858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.589 [2024-11-22 08:44:09.651642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.589 [2024-11-22 08:44:09.651756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.589 [2024-11-22 08:44:09.651771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.784 ms 00:23:34.589 [2024-11-22 08:44:09.651797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.589 [2024-11-22 08:44:09.652142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.589 [2024-11-22 08:44:09.652166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:34.589 [2024-11-22 08:44:09.652177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:34.589 [2024-11-22 08:44:09.652186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.714378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.714422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.850 [2024-11-22 08:44:09.714437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.263 ms 00:23:34.850 [2024-11-22 08:44:09.714451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.714573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.714586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.850 [2024-11-22 08:44:09.714600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:34.850 [2024-11-22 08:44:09.714617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.715091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.715113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.850 [2024-11-22 08:44:09.715127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:23:34.850 [2024-11-22 08:44:09.715144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.715267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.715287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.850 [2024-11-22 08:44:09.715298] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:23:34.850 [2024-11-22 08:44:09.715308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.734933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.734981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.850 [2024-11-22 08:44:09.734995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.627 ms 00:23:34.850 [2024-11-22 08:44:09.735005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.753924] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:34.850 [2024-11-22 08:44:09.753985] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:34.850 [2024-11-22 08:44:09.754001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.754015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:34.850 [2024-11-22 08:44:09.754027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.882 ms 00:23:34.850 [2024-11-22 08:44:09.754039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.782649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.782698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:34.850 [2024-11-22 08:44:09.782728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.572 ms 00:23:34.850 [2024-11-22 08:44:09.782738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.800946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.800991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:34.850 [2024-11-22 08:44:09.801020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.152 ms 00:23:34.850 [2024-11-22 08:44:09.801029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.818382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.818420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:34.850 [2024-11-22 08:44:09.818447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.305 ms 00:23:34.850 [2024-11-22 08:44:09.818457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.819278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.819314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:34.850 [2024-11-22 08:44:09.819326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:23:34.850 [2024-11-22 08:44:09.819337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.901479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.901541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:34.850 [2024-11-22 08:44:09.901557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.245 ms 00:23:34.850 [2024-11-22 08:44:09.901567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.911910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:34.850 [2024-11-22 08:44:09.927280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.927326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:34.850 [2024-11-22 08:44:09.927341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.640 ms 00:23:34.850 [2024-11-22 08:44:09.927351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.927479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.927493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:34.850 [2024-11-22 08:44:09.927505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:34.850 [2024-11-22 08:44:09.927514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.927592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.927611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:34.850 [2024-11-22 08:44:09.927622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:34.850 [2024-11-22 08:44:09.927631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.927664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.927678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:34.850 [2024-11-22 08:44:09.927688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.850 [2024-11-22 08:44:09.927697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.850 [2024-11-22 08:44:09.927761] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:34.850 [2024-11-22 08:44:09.927776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.850 [2024-11-22 08:44:09.927786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:34.850 [2024-11-22 08:44:09.927796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:34.850 [2024-11-22 08:44:09.927806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.110 [2024-11-22 08:44:09.963043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.110 [2024-11-22 08:44:09.963086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:35.110 [2024-11-22 08:44:09.963115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.272 ms 00:23:35.110 [2024-11-22 08:44:09.963126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.110 [2024-11-22 08:44:09.963240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.110 [2024-11-22 08:44:09.963253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:35.110 [2024-11-22 08:44:09.963269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:35.110 [2024-11-22 08:44:09.963279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:35.110 [2024-11-22 08:44:09.964329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:35.110 [2024-11-22 08:44:09.968354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.359 ms, result 0 00:23:35.110 [2024-11-22 08:44:09.969349] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:35.110 [2024-11-22 08:44:09.987477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:35.110  [2024-11-22T08:44:10.197Z] Copying: 4096/4096 [kB] (average 24 MBps)[2024-11-22 08:44:10.157449] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:35.110 [2024-11-22 08:44:10.170896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.110 [2024-11-22 08:44:10.170936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:35.111 [2024-11-22 08:44:10.170948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:35.111 [2024-11-22 08:44:10.170989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.111 [2024-11-22 08:44:10.171010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:35.111 [2024-11-22 08:44:10.174929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.111 [2024-11-22 08:44:10.174968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:35.111 [2024-11-22 08:44:10.174979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.910 ms 00:23:35.111 [2024-11-22 08:44:10.174988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.111 [2024-11-22 08:44:10.176926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.111 [2024-11-22 08:44:10.176971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:35.111 [2024-11-22 08:44:10.176983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:23:35.111 [2024-11-22 08:44:10.176993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.111 [2024-11-22 08:44:10.180209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.111 [2024-11-22 08:44:10.180253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:35.111 [2024-11-22 08:44:10.180265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:23:35.111 [2024-11-22 08:44:10.180274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.111 [2024-11-22 08:44:10.185611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.111 [2024-11-22 08:44:10.185646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:35.111 [2024-11-22 08:44:10.185658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.299 ms 00:23:35.111 [2024-11-22 08:44:10.185667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.371 [2024-11-22 08:44:10.219627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.371 [2024-11-22 08:44:10.219666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:35.371 [2024-11-22 08:44:10.219678] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 33.943 ms
00:23:35.371 [2024-11-22 08:44:10.219687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.371 [2024-11-22 08:44:10.240137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.371 [2024-11-22 08:44:10.240179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:23:35.371 [2024-11-22 08:44:10.240205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.412 ms
00:23:35.371 [2024-11-22 08:44:10.240231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.371 [2024-11-22 08:44:10.240368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.371 [2024-11-22 08:44:10.240381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:35.371 [2024-11-22 08:44:10.240391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:23:35.371 [2024-11-22 08:44:10.240400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.371 [2024-11-22 08:44:10.275459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.371 [2024-11-22 08:44:10.275498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:23:35.371 [2024-11-22 08:44:10.275526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.081 ms
00:23:35.371 [2024-11-22 08:44:10.275536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.371 [2024-11-22 08:44:10.309424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.371 [2024-11-22 08:44:10.309463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:23:35.371 [2024-11-22 08:44:10.309475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.891 ms
00:23:35.371 [2024-11-22 08:44:10.309484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.371 [2024-11-22 08:44:10.343038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.371 [2024-11-22 08:44:10.343075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:35.371 [2024-11-22 08:44:10.343103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.540 ms
00:23:35.371 [2024-11-22 08:44:10.343111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.371 [2024-11-22 08:44:10.377114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.371 [2024-11-22 08:44:10.377153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:35.371 [2024-11-22 08:44:10.377181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.965 ms
00:23:35.371 [2024-11-22 08:44:10.377191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.372 [2024-11-22 08:44:10.377244] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:35.372 [2024-11-22 08:44:10.377274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
00:23:35.373 [2024-11-22 08:44:10.378342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:35.373 [2024-11-22 08:44:10.378356] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86
00:23:35.373 [2024-11-22 08:44:10.378366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:23:35.373 [2024-11-22 08:44:10.378376] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:23:35.373 [2024-11-22 08:44:10.378385] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:23:35.373 [2024-11-22 08:44:10.378396] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:23:35.373 [2024-11-22 08:44:10.378408] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:35.373 [2024-11-22 08:44:10.378418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:35.373 [2024-11-22 08:44:10.378427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:35.373 [2024-11-22 08:44:10.378435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:35.373 [2024-11-22 08:44:10.378444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:35.373 [2024-11-22 08:44:10.378454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.373 [2024-11-22 08:44:10.378468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:23:35.373 [2024-11-22 08:44:10.378478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms
00:23:35.373 [2024-11-22 08:44:10.378488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
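With zero user writes, the WAF line in the dump above is plain arithmetic: write amplification factor = total writes / user writes = 960 / 0, which is reported as "inf"; all 960 writes in this run are FTL metadata writes. The same counters can also be read from a live target over RPC rather than waiting for this shutdown-time dump. A one-line sketch, assuming the repo path used in this run; to the best of my knowledge bdev_ftl_get_stats is the RPC this dump corresponds to in recent SPDK, but treat the call and its -b flag as an assumption, not something shown in this log:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Returns a JSON object with user/metadata write counters for the FTL bdev.
    "$SPDK/scripts/rpc.py" bdev_ftl_get_stats -b ftl0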
00:23:35.373 [2024-11-22 08:44:10.398227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.373 [2024-11-22 08:44:10.398262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:23:35.373 [2024-11-22 08:44:10.398275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.750 ms
00:23:35.373 [2024-11-22 08:44:10.398285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.373 [2024-11-22 08:44:10.398878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.373 [2024-11-22 08:44:10.398902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:23:35.373 [2024-11-22 08:44:10.398913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms
00:23:35.373 [2024-11-22 08:44:10.398923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.452658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.452695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:35.633 [2024-11-22 08:44:10.452708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.452718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.452816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.452829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:35.633 [2024-11-22 08:44:10.452839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.452849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.452900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.452913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:23:35.633 [2024-11-22 08:44:10.452923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.452933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.452952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.452982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:23:35.633 [2024-11-22 08:44:10.452992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.453002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.574246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.574300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:23:35.633 [2024-11-22 08:44:10.574331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.574341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.670009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.670069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:35.633 [2024-11-22 08:44:10.670084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.670094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.670184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.670196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:23:35.633 [2024-11-22 08:44:10.670208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.670218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.670247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.670258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:23:35.633 [2024-11-22 08:44:10.670274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.670284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.670390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.670404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:23:35.633 [2024-11-22 08:44:10.670414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.670424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.670460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.670473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:23:35.633 [2024-11-22 08:44:10.670483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.670496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:35.633 [2024-11-22 08:44:10.670561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:35.633 [2024-11-22 08:44:10.670576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:35.633 [2024-11-22 08:44:10.670586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:35.633 [2024-11-22 08:44:10.670597] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:23:35.634 [2024-11-22 08:44:10.670651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.634 [2024-11-22 08:44:10.670663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.634 [2024-11-22 08:44:10.670677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.634 [2024-11-22 08:44:10.670687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.634 [2024-11-22 08:44:10.670836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.736 ms, result 0 00:23:36.572 00:23:36.572 00:23:36.831 08:44:11 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78595 00:23:36.831 08:44:11 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78595 00:23:36.831 08:44:11 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:36.831 08:44:11 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78595 ']' 00:23:36.831 08:44:11 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.831 08:44:11 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.831 08:44:11 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.831 08:44:11 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.831 08:44:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:36.831 [2024-11-22 08:44:11.787429] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
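For reference, the pattern traced just above (ftl/trim.sh@92-96 plus the waitforlisten helper) is: start a fresh spdk_tgt with FTL init-phase logging, block until its RPC socket answers, then replay the saved bdev configuration. A minimal standalone sketch of that launch-and-wait flow, assuming the repo path used in this run; the spdk_get_version liveness probe and the config file name are illustrative stand-ins, not taken from this log, and the autotest waitforlisten helper is more thorough than this loop:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the SPDK target; -L ftl_init enables the FTL init-phase trace seen below.
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the target responds.
    until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.1; done
    # Re-create nvc0n1/ftl0 and friends from a previously saved configuration.
    "$SPDK/scripts/rpc.py" load_config < /tmp/ftl_config.json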
00:23:36.831 [2024-11-22 08:44:11.787559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78595 ] 00:23:37.091 [2024-11-22 08:44:11.968014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.091 [2024-11-22 08:44:12.075238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.028 08:44:12 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.028 08:44:12 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:38.028 08:44:12 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:38.289 [2024-11-22 08:44:13.118941] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.289 [2024-11-22 08:44:13.119022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.289 [2024-11-22 08:44:13.304290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.289 [2024-11-22 08:44:13.304333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:38.289 [2024-11-22 08:44:13.304352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:38.289 [2024-11-22 08:44:13.304363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.289 [2024-11-22 08:44:13.307847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.289 [2024-11-22 08:44:13.307883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.289 [2024-11-22 08:44:13.307896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:23:38.289 [2024-11-22 08:44:13.307906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.289 [2024-11-22 08:44:13.308030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:38.289 [2024-11-22 08:44:13.309044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:38.289 [2024-11-22 08:44:13.309075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.289 [2024-11-22 08:44:13.309086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.289 [2024-11-22 08:44:13.309099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:23:38.289 [2024-11-22 08:44:13.309109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.289 [2024-11-22 08:44:13.310648] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:38.289 [2024-11-22 08:44:13.329647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.289 [2024-11-22 08:44:13.329691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:38.289 [2024-11-22 08:44:13.329720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.035 ms 00:23:38.289 [2024-11-22 08:44:13.329735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.289 [2024-11-22 08:44:13.329838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.289 [2024-11-22 08:44:13.329857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:38.289 [2024-11-22 08:44:13.329868] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:38.289 [2024-11-22 08:44:13.329882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.289 [2024-11-22 08:44:13.336735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.289 [2024-11-22 08:44:13.336774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.289 [2024-11-22 08:44:13.336801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.808 ms 00:23:38.289 [2024-11-22 08:44:13.336816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.289 [2024-11-22 08:44:13.336948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.289 [2024-11-22 08:44:13.336977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.289 [2024-11-22 08:44:13.336989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:38.289 [2024-11-22 08:44:13.337003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.289 [2024-11-22 08:44:13.337043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.289 [2024-11-22 08:44:13.337060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:38.289 [2024-11-22 08:44:13.337070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:38.290 [2024-11-22 08:44:13.337085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.290 [2024-11-22 08:44:13.337109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:38.290 [2024-11-22 08:44:13.341849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.290 [2024-11-22 08:44:13.341878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.290 [2024-11-22 08:44:13.341910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:23:38.290 [2024-11-22 08:44:13.341920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.290 [2024-11-22 08:44:13.342004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.290 [2024-11-22 08:44:13.342017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:38.290 [2024-11-22 08:44:13.342033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:38.290 [2024-11-22 08:44:13.342048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.290 [2024-11-22 08:44:13.342074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:38.290 [2024-11-22 08:44:13.342098] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:38.290 [2024-11-22 08:44:13.342146] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:38.290 [2024-11-22 08:44:13.342165] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:38.290 [2024-11-22 08:44:13.342273] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:38.290 [2024-11-22 08:44:13.342286] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:38.290 [2024-11-22 08:44:13.342306] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:38.290 [2024-11-22 08:44:13.342324] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:38.290 [2024-11-22 08:44:13.342341] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:38.290 [2024-11-22 08:44:13.342353] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:38.290 [2024-11-22 08:44:13.342367] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:38.290 [2024-11-22 08:44:13.342377] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:38.290 [2024-11-22 08:44:13.342396] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:38.290 [2024-11-22 08:44:13.342406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.290 [2024-11-22 08:44:13.342421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:38.290 [2024-11-22 08:44:13.342431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:23:38.290 [2024-11-22 08:44:13.342446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.290 [2024-11-22 08:44:13.342526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.290 [2024-11-22 08:44:13.342542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:38.290 [2024-11-22 08:44:13.342552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:38.290 [2024-11-22 08:44:13.342566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.290 [2024-11-22 08:44:13.342664] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:38.290 [2024-11-22 08:44:13.342686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:38.290 [2024-11-22 08:44:13.342697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.290 [2024-11-22 08:44:13.342713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.290 [2024-11-22 08:44:13.342724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:38.290 [2024-11-22 08:44:13.342740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:38.290 [2024-11-22 08:44:13.342750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:38.290 [2024-11-22 08:44:13.342770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:38.290 [2024-11-22 08:44:13.342780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:38.290 [2024-11-22 08:44:13.342794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.290 [2024-11-22 08:44:13.342803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:38.290 [2024-11-22 08:44:13.342817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:38.290 [2024-11-22 08:44:13.342827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.290 [2024-11-22 08:44:13.342840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:38.290 [2024-11-22 08:44:13.342850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:38.290 [2024-11-22 08:44:13.342865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.290 
[2024-11-22 08:44:13.342874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:38.290 [2024-11-22 08:44:13.342888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:38.290 [2024-11-22 08:44:13.342898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.290 [2024-11-22 08:44:13.342912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:38.290 [2024-11-22 08:44:13.342932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:38.290 [2024-11-22 08:44:13.342947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.290 [2024-11-22 08:44:13.342978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:38.290 [2024-11-22 08:44:13.342998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:38.290 [2024-11-22 08:44:13.343007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.290 [2024-11-22 08:44:13.343021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:38.290 [2024-11-22 08:44:13.343031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:38.290 [2024-11-22 08:44:13.343045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.290 [2024-11-22 08:44:13.343055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:38.290 [2024-11-22 08:44:13.343069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:38.290 [2024-11-22 08:44:13.343078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.290 [2024-11-22 08:44:13.343092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:38.290 [2024-11-22 08:44:13.343102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:38.290 [2024-11-22 08:44:13.343117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.290 [2024-11-22 08:44:13.343127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:38.290 [2024-11-22 08:44:13.343141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:38.290 [2024-11-22 08:44:13.343150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.290 [2024-11-22 08:44:13.343166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:38.290 [2024-11-22 08:44:13.343176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:38.290 [2024-11-22 08:44:13.343194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.290 [2024-11-22 08:44:13.343203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:38.290 [2024-11-22 08:44:13.343217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:38.290 [2024-11-22 08:44:13.343227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.290 [2024-11-22 08:44:13.343241] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:38.290 [2024-11-22 08:44:13.343252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:38.290 [2024-11-22 08:44:13.343272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.290 [2024-11-22 08:44:13.343282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.290 [2024-11-22 08:44:13.343297] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:38.290 [2024-11-22 08:44:13.343307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:38.290 [2024-11-22 08:44:13.343322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:38.290 [2024-11-22 08:44:13.343332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:38.290 [2024-11-22 08:44:13.343346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:38.290 [2024-11-22 08:44:13.343356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:38.290 [2024-11-22 08:44:13.343371] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:38.290 [2024-11-22 08:44:13.343384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.290 [2024-11-22 08:44:13.343404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:38.290 [2024-11-22 08:44:13.343415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:38.290 [2024-11-22 08:44:13.343431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:38.290 [2024-11-22 08:44:13.343442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:38.290 [2024-11-22 08:44:13.343457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:38.290 [2024-11-22 08:44:13.343467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:38.290 [2024-11-22 08:44:13.343483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:38.290 [2024-11-22 08:44:13.343493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:38.290 [2024-11-22 08:44:13.343508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:38.290 [2024-11-22 08:44:13.343518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:38.290 [2024-11-22 08:44:13.343533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:38.290 [2024-11-22 08:44:13.343545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:38.290 [2024-11-22 08:44:13.343560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:38.290 [2024-11-22 08:44:13.343571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:38.291 [2024-11-22 08:44:13.343586] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:38.291 [2024-11-22 
08:44:13.343598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.291 [2024-11-22 08:44:13.343618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:38.291 [2024-11-22 08:44:13.343628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:38.291 [2024-11-22 08:44:13.343643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:38.291 [2024-11-22 08:44:13.343654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:38.291 [2024-11-22 08:44:13.343670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.291 [2024-11-22 08:44:13.343680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:38.291 [2024-11-22 08:44:13.343695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:23:38.291 [2024-11-22 08:44:13.343705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.381005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.381037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.550 [2024-11-22 08:44:13.381052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.291 ms 00:23:38.550 [2024-11-22 08:44:13.381062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.381174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.381187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:38.550 [2024-11-22 08:44:13.381200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:38.550 [2024-11-22 08:44:13.381209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.428715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.428746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.550 [2024-11-22 08:44:13.428786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.553 ms 00:23:38.550 [2024-11-22 08:44:13.428797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.428891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.428903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.550 [2024-11-22 08:44:13.428918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:38.550 [2024-11-22 08:44:13.428929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.429388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.429406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.550 [2024-11-22 08:44:13.429428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:23:38.550 [2024-11-22 08:44:13.429439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.429561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.429574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.550 [2024-11-22 08:44:13.429589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:38.550 [2024-11-22 08:44:13.429600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.450504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.450535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.550 [2024-11-22 08:44:13.450569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.908 ms 00:23:38.550 [2024-11-22 08:44:13.450579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.469254] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:38.550 [2024-11-22 08:44:13.469288] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:38.550 [2024-11-22 08:44:13.469323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.469333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:38.550 [2024-11-22 08:44:13.469349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.653 ms 00:23:38.550 [2024-11-22 08:44:13.469359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.550 [2024-11-22 08:44:13.497506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.550 [2024-11-22 08:44:13.497541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:38.551 [2024-11-22 08:44:13.497572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.109 ms 00:23:38.551 [2024-11-22 08:44:13.497583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.551 [2024-11-22 08:44:13.514794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.551 [2024-11-22 08:44:13.514826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:38.551 [2024-11-22 08:44:13.514859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.153 ms 00:23:38.551 [2024-11-22 08:44:13.514869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.551 [2024-11-22 08:44:13.532008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.551 [2024-11-22 08:44:13.532040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:38.551 [2024-11-22 08:44:13.532070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.089 ms 00:23:38.551 [2024-11-22 08:44:13.532080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.551 [2024-11-22 08:44:13.532795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.551 [2024-11-22 08:44:13.532817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:38.551 [2024-11-22 08:44:13.532831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:23:38.551 [2024-11-22 08:44:13.532840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.811 [2024-11-22 
08:44:13.641428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.811 [2024-11-22 08:44:13.641487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:38.811 [2024-11-22 08:44:13.641527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.730 ms 00:23:38.811 [2024-11-22 08:44:13.641538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.811 [2024-11-22 08:44:13.651891] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:38.811 [2024-11-22 08:44:13.667471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.811 [2024-11-22 08:44:13.667523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:38.811 [2024-11-22 08:44:13.667543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.859 ms 00:23:38.811 [2024-11-22 08:44:13.667557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.811 [2024-11-22 08:44:13.667646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.811 [2024-11-22 08:44:13.667664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:38.811 [2024-11-22 08:44:13.667675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:38.811 [2024-11-22 08:44:13.667689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.811 [2024-11-22 08:44:13.667739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.811 [2024-11-22 08:44:13.667754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:38.811 [2024-11-22 08:44:13.667765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:38.811 [2024-11-22 08:44:13.667779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.811 [2024-11-22 08:44:13.667807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.811 [2024-11-22 08:44:13.667822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:38.811 [2024-11-22 08:44:13.667831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:38.811 [2024-11-22 08:44:13.667845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.811 [2024-11-22 08:44:13.667886] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:38.811 [2024-11-22 08:44:13.667906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.811 [2024-11-22 08:44:13.667916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:38.811 [2024-11-22 08:44:13.667936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:38.811 [2024-11-22 08:44:13.667946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.811 [2024-11-22 08:44:13.702263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.811 [2024-11-22 08:44:13.702298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:38.811 [2024-11-22 08:44:13.702333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.314 ms 00:23:38.811 [2024-11-22 08:44:13.702354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.811 [2024-11-22 08:44:13.702472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.811 [2024-11-22 08:44:13.702486] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:38.811 [2024-11-22 08:44:13.702502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:23:38.811 [2024-11-22 08:44:13.702516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:38.811 [2024-11-22 08:44:13.703527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:38.811 [2024-11-22 08:44:13.707561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.536 ms, result 0
00:23:38.811 [2024-11-22 08:44:13.708894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:38.811 Some configs were skipped because the RPC state that can call them passed over.
00:23:38.811 08:44:13 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:23:39.071 [2024-11-22 08:44:13.951823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:39.071 [2024-11-22 08:44:13.951878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:23:39.071 [2024-11-22 08:44:13.951893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms
00:23:39.071 [2024-11-22 08:44:13.951906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:39.071 [2024-11-22 08:44:13.951943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.637 ms, result 0
00:23:39.071 true
00:23:39.071 08:44:13 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:23:39.331 [2024-11-22 08:44:14.159523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:39.331 [2024-11-22 08:44:14.159571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:23:39.331 [2024-11-22 08:44:14.159589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms
00:23:39.331 [2024-11-22 08:44:14.159600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:39.331 [2024-11-22 08:44:14.159647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.463 ms, result 0
00:23:39.331 true
00:23:39.331 08:44:14 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78595
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78595 ']'
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78595
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78595
00:23:39.331 killing process with pid 78595
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78595'
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78595
00:23:39.331 08:44:14 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78595
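The two bdev_ftl_unmap calls above (trim.sh@99 and trim.sh@100) trim 1024 blocks at each end of the device: --lba 0 covers the first stripe, and --lba 23591936 is 23592960 - 1024, i.e. the last 1024 blocks of the 23592960-entry L2P reported during startup. Each call runs as its own 'FTL trim' management process in the trace. The same pair, restated with the second LBA spelled out as that arithmetic (paths as in this run):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"
    # Unmap the first and the last 1024 blocks of the 23592960-block ftl0 bdev.
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$RPC" bdev_ftl_unmap -b ftl0 --lba $((23592960 - 1024)) --num_blocks 1024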
00:23:40.268 [2024-11-22 08:44:15.288450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.288499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:40.268 [2024-11-22 08:44:15.288513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:23:40.268 [2024-11-22 08:44:15.288525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.268 [2024-11-22 08:44:15.288547] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:40.268 [2024-11-22 08:44:15.292674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.292703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:40.268 [2024-11-22 08:44:15.292719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.113 ms
00:23:40.268 [2024-11-22 08:44:15.292728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.268 [2024-11-22 08:44:15.293022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.293040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:40.268 [2024-11-22 08:44:15.293054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms
00:23:40.268 [2024-11-22 08:44:15.293063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.268 [2024-11-22 08:44:15.296453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.296486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:40.268 [2024-11-22 08:44:15.296503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms
00:23:40.268 [2024-11-22 08:44:15.296513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.268 [2024-11-22 08:44:15.301904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.301935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:23:40.268 [2024-11-22 08:44:15.301949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.359 ms
00:23:40.268 [2024-11-22 08:44:15.301966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.268 [2024-11-22 08:44:15.316156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.316186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:23:40.268 [2024-11-22 08:44:15.316203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.155 ms
00:23:40.268 [2024-11-22 08:44:15.316220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.268 [2024-11-22 08:44:15.326310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.326343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:23:40.268 [2024-11-22 08:44:15.326377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.036 ms
00:23:40.268 [2024-11-22 08:44:15.326387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.268 [2024-11-22 08:44:15.326525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.326538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:40.268 [2024-11-22 08:44:15.326551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms
00:23:40.268 [2024-11-22 08:44:15.326560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.268 [2024-11-22 08:44:15.341394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.268 [2024-11-22 08:44:15.341424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:23:40.268 [2024-11-22 08:44:15.341438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.835 ms
00:23:40.268 [2024-11-22 08:44:15.341447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.530 [2024-11-22 08:44:15.356431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.530 [2024-11-22 08:44:15.356460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:23:40.530 [2024-11-22 08:44:15.356482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.950 ms
00:23:40.530 [2024-11-22 08:44:15.356491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.530 [2024-11-22 08:44:15.370491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.530 [2024-11-22 08:44:15.370521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:40.530 [2024-11-22 08:44:15.370540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.965 ms
00:23:40.530 [2024-11-22 08:44:15.370549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.530 [2024-11-22 08:44:15.384624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.530 [2024-11-22 08:44:15.384654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:40.530 [2024-11-22 08:44:15.384671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.029 ms
00:23:40.530 [2024-11-22 08:44:15.384680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:40.530 [2024-11-22 08:44:15.384745] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:40.530 [2024-11-22 08:44:15.384760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-59: 0 / 261120 wr_cnt: 0 state: free
00:23:40.531 [2024-11-22 08:44:15.385561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120
wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.385998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.386013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.386024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.386039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.386050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.386065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.386075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.386092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:40.531 [2024-11-22 08:44:15.386109] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:40.531 [2024-11-22 08:44:15.386133] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86 00:23:40.531 [2024-11-22 08:44:15.386155] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:40.531 [2024-11-22 08:44:15.386176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:40.531 [2024-11-22 08:44:15.386186] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:40.531 [2024-11-22 08:44:15.386199] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:40.531 [2024-11-22 08:44:15.386208] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:40.531 [2024-11-22 08:44:15.386230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:40.531 [2024-11-22 08:44:15.386240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:40.531 [2024-11-22 08:44:15.386251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:40.531 [2024-11-22 08:44:15.386259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:40.531 [2024-11-22 08:44:15.386270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:40.531 [2024-11-22 08:44:15.386280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:40.531 [2024-11-22 08:44:15.386292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:23:40.531 [2024-11-22 08:44:15.386301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.531 [2024-11-22 08:44:15.405059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.531 [2024-11-22 08:44:15.405088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:40.531 [2024-11-22 08:44:15.405106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.754 ms 00:23:40.531 [2024-11-22 08:44:15.405115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.531 [2024-11-22 08:44:15.405680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.531 [2024-11-22 08:44:15.405703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:40.531 [2024-11-22 08:44:15.405733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:23:40.531 [2024-11-22 08:44:15.405745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.531 [2024-11-22 08:44:15.471509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.531 [2024-11-22 08:44:15.471541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.531 [2024-11-22 08:44:15.471574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.531 [2024-11-22 08:44:15.471585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.531 [2024-11-22 08:44:15.471670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.531 [2024-11-22 08:44:15.471693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.531 [2024-11-22 08:44:15.471707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.531 [2024-11-22 08:44:15.471722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.531 [2024-11-22 08:44:15.471772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.531 [2024-11-22 08:44:15.471784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.531 [2024-11-22 08:44:15.471802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.531 [2024-11-22 08:44:15.471812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.531 [2024-11-22 08:44:15.471833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.531 [2024-11-22 08:44:15.471843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.531 [2024-11-22 08:44:15.471858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.531 [2024-11-22 08:44:15.471867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.532 [2024-11-22 08:44:15.588897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.532 [2024-11-22 08:44:15.588941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.532 [2024-11-22 08:44:15.588965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.532 [2024-11-22 08:44:15.588976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.792 [2024-11-22 
08:44:15.684868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.792 [2024-11-22 08:44:15.684910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.792 [2024-11-22 08:44:15.684929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.792 [2024-11-22 08:44:15.684944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.792 [2024-11-22 08:44:15.685050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.792 [2024-11-22 08:44:15.685063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.792 [2024-11-22 08:44:15.685084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.792 [2024-11-22 08:44:15.685094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.792 [2024-11-22 08:44:15.685127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.792 [2024-11-22 08:44:15.685138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.792 [2024-11-22 08:44:15.685153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.792 [2024-11-22 08:44:15.685163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.792 [2024-11-22 08:44:15.685292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.792 [2024-11-22 08:44:15.685305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.792 [2024-11-22 08:44:15.685320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.792 [2024-11-22 08:44:15.685330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.792 [2024-11-22 08:44:15.685372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.792 [2024-11-22 08:44:15.685385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:40.792 [2024-11-22 08:44:15.685400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.792 [2024-11-22 08:44:15.685409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.792 [2024-11-22 08:44:15.685452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.792 [2024-11-22 08:44:15.685468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.792 [2024-11-22 08:44:15.685487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.792 [2024-11-22 08:44:15.685497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.792 [2024-11-22 08:44:15.685544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.792 [2024-11-22 08:44:15.685555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.792 [2024-11-22 08:44:15.685570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.792 [2024-11-22 08:44:15.685580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.792 [2024-11-22 08:44:15.685726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.888 ms, result 0 00:23:41.730 08:44:16 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:41.730 [2024-11-22 08:44:16.740622] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:23:41.730 [2024-11-22 08:44:16.740748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78663 ] 00:23:41.989 [2024-11-22 08:44:16.923360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.989 [2024-11-22 08:44:17.023313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.559 [2024-11-22 08:44:17.378530] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:42.559 [2024-11-22 08:44:17.378596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:42.559 [2024-11-22 08:44:17.540562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.540611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:42.559 [2024-11-22 08:44:17.540627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:42.559 [2024-11-22 08:44:17.540653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.543681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.543723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:42.559 [2024-11-22 08:44:17.543736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:23:42.559 [2024-11-22 08:44:17.543745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.543858] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:42.559 [2024-11-22 08:44:17.544831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:42.559 [2024-11-22 08:44:17.544865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.544876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:42.559 [2024-11-22 08:44:17.544888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:23:42.559 [2024-11-22 08:44:17.544897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.546545] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:42.559 [2024-11-22 08:44:17.564673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.564718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:42.559 [2024-11-22 08:44:17.564732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.157 ms 00:23:42.559 [2024-11-22 08:44:17.564757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.564855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.564870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:42.559 [2024-11-22 08:44:17.564882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:42.559 [2024-11-22 
08:44:17.564892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.571618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.571648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:42.559 [2024-11-22 08:44:17.571660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.695 ms 00:23:42.559 [2024-11-22 08:44:17.571669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.571780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.571794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:42.559 [2024-11-22 08:44:17.571806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:42.559 [2024-11-22 08:44:17.571816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.571844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.571857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:42.559 [2024-11-22 08:44:17.571868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:42.559 [2024-11-22 08:44:17.571878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.571900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:42.559 [2024-11-22 08:44:17.576655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.576691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:42.559 [2024-11-22 08:44:17.576703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:23:42.559 [2024-11-22 08:44:17.576713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.559 [2024-11-22 08:44:17.576796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.559 [2024-11-22 08:44:17.576809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:42.559 [2024-11-22 08:44:17.576820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:42.560 [2024-11-22 08:44:17.576829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.560 [2024-11-22 08:44:17.576848] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:42.560 [2024-11-22 08:44:17.576872] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:42.560 [2024-11-22 08:44:17.576906] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:42.560 [2024-11-22 08:44:17.576923] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:42.560 [2024-11-22 08:44:17.577019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:42.560 [2024-11-22 08:44:17.577033] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:42.560 [2024-11-22 08:44:17.577045] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:23:42.560 [2024-11-22 08:44:17.577058] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577089] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577100] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:42.560 [2024-11-22 08:44:17.577110] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:42.560 [2024-11-22 08:44:17.577119] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:42.560 [2024-11-22 08:44:17.577129] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:42.560 [2024-11-22 08:44:17.577140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.560 [2024-11-22 08:44:17.577150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:42.560 [2024-11-22 08:44:17.577160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:42.560 [2024-11-22 08:44:17.577170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.560 [2024-11-22 08:44:17.577247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.560 [2024-11-22 08:44:17.577257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:42.560 [2024-11-22 08:44:17.577271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:42.560 [2024-11-22 08:44:17.577280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.560 [2024-11-22 08:44:17.577370] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:42.560 [2024-11-22 08:44:17.577383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:42.560 [2024-11-22 08:44:17.577393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:42.560 [2024-11-22 08:44:17.577424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:42.560 [2024-11-22 08:44:17.577453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:42.560 [2024-11-22 08:44:17.577472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:42.560 [2024-11-22 08:44:17.577481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:42.560 [2024-11-22 08:44:17.577490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:42.560 [2024-11-22 08:44:17.577510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:42.560 [2024-11-22 08:44:17.577519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:42.560 [2024-11-22 08:44:17.577529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:23:42.560 [2024-11-22 08:44:17.577547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:42.560 [2024-11-22 08:44:17.577576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:42.560 [2024-11-22 08:44:17.577604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:42.560 [2024-11-22 08:44:17.577631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:42.560 [2024-11-22 08:44:17.577658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:42.560 [2024-11-22 08:44:17.577685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:42.560 [2024-11-22 08:44:17.577703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:42.560 [2024-11-22 08:44:17.577712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:42.560 [2024-11-22 08:44:17.577721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:42.560 [2024-11-22 08:44:17.577730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:42.560 [2024-11-22 08:44:17.577739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:42.560 [2024-11-22 08:44:17.577748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:42.560 [2024-11-22 08:44:17.577769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:42.560 [2024-11-22 08:44:17.577779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577788] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:42.560 [2024-11-22 08:44:17.577797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:42.560 [2024-11-22 08:44:17.577807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.560 [2024-11-22 08:44:17.577831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:42.560 [2024-11-22 08:44:17.577840] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:42.560 [2024-11-22 08:44:17.577849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:42.560 [2024-11-22 08:44:17.577859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:42.560 [2024-11-22 08:44:17.577868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:42.560 [2024-11-22 08:44:17.577877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:42.560 [2024-11-22 08:44:17.577888] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:42.560 [2024-11-22 08:44:17.577900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:42.560 [2024-11-22 08:44:17.577911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:42.560 [2024-11-22 08:44:17.577922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:42.560 [2024-11-22 08:44:17.577932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:42.560 [2024-11-22 08:44:17.577942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:42.560 [2024-11-22 08:44:17.577952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:42.560 [2024-11-22 08:44:17.577962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:42.560 [2024-11-22 08:44:17.577983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:42.560 [2024-11-22 08:44:17.577994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:42.561 [2024-11-22 08:44:17.578004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:42.561 [2024-11-22 08:44:17.578015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:42.561 [2024-11-22 08:44:17.578026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:42.561 [2024-11-22 08:44:17.578036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:42.561 [2024-11-22 08:44:17.578047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:42.561 [2024-11-22 08:44:17.578057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:42.561 [2024-11-22 08:44:17.578068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:42.561 [2024-11-22 08:44:17.578079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:42.561 [2024-11-22 08:44:17.578090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:42.561 [2024-11-22 08:44:17.578101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:42.561 [2024-11-22 08:44:17.578111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:42.561 [2024-11-22 08:44:17.578122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:42.561 [2024-11-22 08:44:17.578133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.561 [2024-11-22 08:44:17.578143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:42.561 [2024-11-22 08:44:17.578157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:23:42.561 [2024-11-22 08:44:17.578167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.561 [2024-11-22 08:44:17.616440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.561 [2024-11-22 08:44:17.616476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:42.561 [2024-11-22 08:44:17.616489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.281 ms 00:23:42.561 [2024-11-22 08:44:17.616516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.561 [2024-11-22 08:44:17.616630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.561 [2024-11-22 08:44:17.616647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:42.561 [2024-11-22 08:44:17.616658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:42.561 [2024-11-22 08:44:17.616668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.688327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.688368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:42.821 [2024-11-22 08:44:17.688381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.752 ms 00:23:42.821 [2024-11-22 08:44:17.688395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.688505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.688519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:42.821 [2024-11-22 08:44:17.688530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:42.821 [2024-11-22 08:44:17.688540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.689002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.689024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:42.821 [2024-11-22 08:44:17.689036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:23:42.821 [2024-11-22 08:44:17.689052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.689169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.689183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:42.821 [2024-11-22 08:44:17.689194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:42.821 [2024-11-22 08:44:17.689204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.708129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.708165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:42.821 [2024-11-22 08:44:17.708194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.933 ms 00:23:42.821 [2024-11-22 08:44:17.708204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.725845] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:42.821 [2024-11-22 08:44:17.725887] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:42.821 [2024-11-22 08:44:17.725901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.725911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:42.821 [2024-11-22 08:44:17.725938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.622 ms 00:23:42.821 [2024-11-22 08:44:17.725949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.753594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.753643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:42.821 [2024-11-22 08:44:17.753656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.600 ms 00:23:42.821 [2024-11-22 08:44:17.753666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.771149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.771188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:42.821 [2024-11-22 08:44:17.771200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.414 ms 00:23:42.821 [2024-11-22 08:44:17.771210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.788211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.788247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:42.821 [2024-11-22 08:44:17.788260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.937 ms 00:23:42.821 [2024-11-22 08:44:17.788269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.789042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.789074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:42.821 [2024-11-22 08:44:17.789086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:23:42.821 [2024-11-22 08:44:17.789096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.869699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 
08:44:17.869757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:42.821 [2024-11-22 08:44:17.869772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.705 ms 00:23:42.821 [2024-11-22 08:44:17.869799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.821 [2024-11-22 08:44:17.880006] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:42.821 [2024-11-22 08:44:17.895276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.821 [2024-11-22 08:44:17.895321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:42.821 [2024-11-22 08:44:17.895335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.421 ms 00:23:42.822 [2024-11-22 08:44:17.895345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.822 [2024-11-22 08:44:17.895472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.822 [2024-11-22 08:44:17.895486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:42.822 [2024-11-22 08:44:17.895498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:42.822 [2024-11-22 08:44:17.895508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.822 [2024-11-22 08:44:17.895560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.822 [2024-11-22 08:44:17.895571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:42.822 [2024-11-22 08:44:17.895582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:42.822 [2024-11-22 08:44:17.895592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.822 [2024-11-22 08:44:17.895619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.822 [2024-11-22 08:44:17.895632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:42.822 [2024-11-22 08:44:17.895642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:42.822 [2024-11-22 08:44:17.895652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.822 [2024-11-22 08:44:17.895687] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:42.822 [2024-11-22 08:44:17.895699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.822 [2024-11-22 08:44:17.895710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:42.822 [2024-11-22 08:44:17.895720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:42.822 [2024-11-22 08:44:17.895730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.081 [2024-11-22 08:44:17.929830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.081 [2024-11-22 08:44:17.929868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:43.081 [2024-11-22 08:44:17.929881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.119 ms 00:23:43.081 [2024-11-22 08:44:17.929891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.081 [2024-11-22 08:44:17.930032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.081 [2024-11-22 08:44:17.930047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:43.081 [2024-11-22 
08:44:17.930058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:43.081 [2024-11-22 08:44:17.930069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.081 [2024-11-22 08:44:17.931258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:43.081 [2024-11-22 08:44:17.935376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.017 ms, result 0 00:23:43.081 [2024-11-22 08:44:17.936278] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:43.081 [2024-11-22 08:44:17.953977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.021  [2024-11-22T08:44:20.050Z] Copying: 27/256 [MB] (27 MBps) [2024-11-22T08:44:21.447Z] Copying: 51/256 [MB] (23 MBps) [2024-11-22T08:44:22.383Z] Copying: 74/256 [MB] (23 MBps) [2024-11-22T08:44:23.319Z] Copying: 98/256 [MB] (23 MBps) [2024-11-22T08:44:24.254Z] Copying: 122/256 [MB] (24 MBps) [2024-11-22T08:44:25.192Z] Copying: 146/256 [MB] (24 MBps) [2024-11-22T08:44:26.131Z] Copying: 171/256 [MB] (24 MBps) [2024-11-22T08:44:27.070Z] Copying: 196/256 [MB] (24 MBps) [2024-11-22T08:44:28.008Z] Copying: 220/256 [MB] (24 MBps) [2024-11-22T08:44:28.578Z] Copying: 245/256 [MB] (24 MBps) [2024-11-22T08:44:28.838Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-22 08:44:28.725058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:53.751 [2024-11-22 08:44:28.744966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.751 [2024-11-22 08:44:28.745022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:53.751 [2024-11-22 08:44:28.745038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:53.751 [2024-11-22 08:44:28.745057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.751 [2024-11-22 08:44:28.745086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:53.751 [2024-11-22 08:44:28.749559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.751 [2024-11-22 08:44:28.749598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:53.751 [2024-11-22 08:44:28.749612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.462 ms 00:23:53.751 [2024-11-22 08:44:28.749623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.751 [2024-11-22 08:44:28.749876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.751 [2024-11-22 08:44:28.749890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:53.751 [2024-11-22 08:44:28.749902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:23:53.751 [2024-11-22 08:44:28.749912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.751 [2024-11-22 08:44:28.752886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.751 [2024-11-22 08:44:28.752919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:53.751 [2024-11-22 08:44:28.752932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:23:53.751 [2024-11-22 08:44:28.752942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:53.751 [2024-11-22 08:44:28.758831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.751 [2024-11-22 08:44:28.758893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:53.751 [2024-11-22 08:44:28.758922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.868 ms 00:23:53.751 [2024-11-22 08:44:28.758933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.751 [2024-11-22 08:44:28.793842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.751 [2024-11-22 08:44:28.793887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:53.751 [2024-11-22 08:44:28.793900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.878 ms 00:23:53.751 [2024-11-22 08:44:28.793910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.751 [2024-11-22 08:44:28.814603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.751 [2024-11-22 08:44:28.814658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:53.751 [2024-11-22 08:44:28.814671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.640 ms 00:23:53.751 [2024-11-22 08:44:28.814685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.751 [2024-11-22 08:44:28.814832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.751 [2024-11-22 08:44:28.814846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.751 [2024-11-22 08:44:28.814857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:53.751 [2024-11-22 08:44:28.814867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.012 [2024-11-22 08:44:28.849222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.012 [2024-11-22 08:44:28.849256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:54.012 [2024-11-22 08:44:28.849268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.381 ms 00:23:54.012 [2024-11-22 08:44:28.849277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.012 [2024-11-22 08:44:28.883004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.012 [2024-11-22 08:44:28.883037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:54.012 [2024-11-22 08:44:28.883049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.710 ms 00:23:54.012 [2024-11-22 08:44:28.883074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.012 [2024-11-22 08:44:28.917848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.012 [2024-11-22 08:44:28.917885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:54.012 [2024-11-22 08:44:28.917897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.774 ms 00:23:54.012 [2024-11-22 08:44:28.917906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.012 [2024-11-22 08:44:28.951519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.012 [2024-11-22 08:44:28.951557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:54.012 [2024-11-22 08:44:28.951585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.562 ms 00:23:54.012 
[2024-11-22 08:44:28.951596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.012 [2024-11-22 08:44:28.951651] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:54.012 [2024-11-22 08:44:28.951668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951926] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.951990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:54.012 [2024-11-22 08:44:28.952149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 
08:44:28.952201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:54.013 [2024-11-22 08:44:28.952463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:54.013 [2024-11-22 08:44:28.952772] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:54.013 [2024-11-22 08:44:28.952782] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 735ff4ad-5bc5-4e76-a241-d6b3bf5a6c86 00:23:54.013 [2024-11-22 08:44:28.952793] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:54.013 [2024-11-22 08:44:28.952803] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:54.013 [2024-11-22 08:44:28.952813] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:54.013 [2024-11-22 08:44:28.952823] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:54.013 [2024-11-22 08:44:28.952832] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:54.013 [2024-11-22 08:44:28.952843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:54.013 [2024-11-22 08:44:28.952853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:54.013 [2024-11-22 08:44:28.952863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:54.013 [2024-11-22 08:44:28.952871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:54.013 [2024-11-22 08:44:28.952881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.013 [2024-11-22 08:44:28.952896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:54.013 [2024-11-22 08:44:28.952907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:23:54.013 [2024-11-22 08:44:28.952917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.013 [2024-11-22 08:44:28.972456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.013 [2024-11-22 08:44:28.972491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:54.013 [2024-11-22 08:44:28.972519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.549 ms 00:23:54.013 [2024-11-22 08:44:28.972529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.013 [2024-11-22 08:44:28.973069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.013 [2024-11-22 08:44:28.973091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:54.013 [2024-11-22 08:44:28.973103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:23:54.013 [2024-11-22 08:44:28.973113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.013 [2024-11-22 08:44:29.024871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.013 [2024-11-22 08:44:29.024906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:54.013 [2024-11-22 08:44:29.024919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.013 [2024-11-22 08:44:29.024928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.013 [2024-11-22 08:44:29.025032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.013 [2024-11-22 08:44:29.025045] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:54.013 [2024-11-22 08:44:29.025056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.014 [2024-11-22 08:44:29.025066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.014 [2024-11-22 08:44:29.025113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.014 [2024-11-22 08:44:29.025126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:54.014 [2024-11-22 08:44:29.025136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.014 [2024-11-22 08:44:29.025146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.014 [2024-11-22 08:44:29.025165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.014 [2024-11-22 08:44:29.025180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:54.014 [2024-11-22 08:44:29.025190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.014 [2024-11-22 08:44:29.025199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.277 [2024-11-22 08:44:29.144661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.277 [2024-11-22 08:44:29.144711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:54.277 [2024-11-22 08:44:29.144741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.277 [2024-11-22 08:44:29.144752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.278 [2024-11-22 08:44:29.243011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.278 [2024-11-22 08:44:29.243208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:54.278 [2024-11-22 08:44:29.243233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.278 [2024-11-22 08:44:29.243245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.278 [2024-11-22 08:44:29.243317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.278 [2024-11-22 08:44:29.243329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:54.278 [2024-11-22 08:44:29.243340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.278 [2024-11-22 08:44:29.243351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.278 [2024-11-22 08:44:29.243380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.278 [2024-11-22 08:44:29.243392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:54.278 [2024-11-22 08:44:29.243409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.278 [2024-11-22 08:44:29.243419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.278 [2024-11-22 08:44:29.243535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.278 [2024-11-22 08:44:29.243548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:54.278 [2024-11-22 08:44:29.243559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.278 [2024-11-22 08:44:29.243570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.278 [2024-11-22 08:44:29.243607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:54.278 [2024-11-22 08:44:29.243619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:54.278 [2024-11-22 08:44:29.243629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.278 [2024-11-22 08:44:29.243643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.278 [2024-11-22 08:44:29.243683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.278 [2024-11-22 08:44:29.243694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:54.278 [2024-11-22 08:44:29.243704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.278 [2024-11-22 08:44:29.243715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.278 [2024-11-22 08:44:29.243755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.278 [2024-11-22 08:44:29.243767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:54.278 [2024-11-22 08:44:29.243782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.278 [2024-11-22 08:44:29.243793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.278 [2024-11-22 08:44:29.243929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 499.788 ms, result 0 00:23:55.215 00:23:55.215 00:23:55.215 08:44:30 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:55.783 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:55.783 08:44:30 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:55.783 08:44:30 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:55.783 08:44:30 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:55.783 08:44:30 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:55.783 08:44:30 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:55.783 08:44:30 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:55.783 Process with pid 78595 is not found 00:23:55.783 08:44:30 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78595 00:23:55.783 08:44:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78595 ']' 00:23:55.783 08:44:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78595 00:23:55.783 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78595) - No such process 00:23:55.783 08:44:30 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78595 is not found' 00:23:55.783 ************************************ 00:23:55.783 END TEST ftl_trim 00:23:55.783 ************************************ 00:23:55.783 00:23:55.783 real 1m10.806s 00:23:55.783 user 1m35.054s 00:23:55.783 sys 0m6.688s 00:23:55.783 08:44:30 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.783 08:44:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:55.783 08:44:30 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:55.783 08:44:30 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:55.783 08:44:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.783 08:44:30 ftl -- common/autotest_common.sh@10 
-- # set +x 00:23:55.783 ************************************ 00:23:55.783 START TEST ftl_restore 00:23:55.783 ************************************ 00:23:55.783 08:44:30 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:56.043 * Looking for test storage... 00:23:56.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.043 08:44:30 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:56.043 08:44:30 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:23:56.043 08:44:30 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:56.043 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.043 08:44:31 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:23:56.043 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.043 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.043 --rc genhtml_branch_coverage=1 00:23:56.043 --rc genhtml_function_coverage=1 00:23:56.043 --rc genhtml_legend=1 00:23:56.043 --rc geninfo_all_blocks=1 00:23:56.043 --rc geninfo_unexecuted_blocks=1 00:23:56.043 00:23:56.043 ' 00:23:56.043 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.043 --rc genhtml_branch_coverage=1 00:23:56.043 --rc genhtml_function_coverage=1 00:23:56.043 --rc genhtml_legend=1 00:23:56.043 --rc geninfo_all_blocks=1 00:23:56.043 --rc geninfo_unexecuted_blocks=1 00:23:56.043 00:23:56.043 ' 00:23:56.043 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.043 --rc genhtml_branch_coverage=1 00:23:56.043 --rc genhtml_function_coverage=1 00:23:56.043 --rc genhtml_legend=1 00:23:56.043 --rc geninfo_all_blocks=1 00:23:56.043 --rc geninfo_unexecuted_blocks=1 00:23:56.043 00:23:56.043 ' 00:23:56.043 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.043 --rc genhtml_branch_coverage=1 00:23:56.043 --rc genhtml_function_coverage=1 00:23:56.043 --rc genhtml_legend=1 00:23:56.043 --rc geninfo_all_blocks=1 00:23:56.043 --rc geninfo_unexecuted_blocks=1 00:23:56.043 00:23:56.043 ' 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
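The xtrace above walks scripts/common.sh's lt/cmp_versions helpers deciding that lcov 1.15 is older than 2 before the legacy --rc lcov_* coverage flags are exported. A minimal standalone sketch of that component-wise compare, assuming the same '.-:'-based field splitting seen in the trace (the name version_lt is ours; the script's actual helpers are lt and cmp_versions):

#!/usr/bin/env bash
# Sketch: split both version strings on '.', '-' and ':' and compare
# numerically field by field, mirroring the IFS=.-: / read -ra pattern above.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0                 # first lower field: strictly older
        (( a > b )) && return 1                 # first higher field: not older
    done
    return 1                                    # all fields equal: not strictly less
}
version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"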
00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:56.043 08:44:31 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.S0cRNlbwHM 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:56.044 
08:44:31 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78872 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.044 08:44:31 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78872 00:23:56.044 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78872 ']' 00:23:56.044 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.044 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.044 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.044 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.044 08:44:31 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:56.303 [2024-11-22 08:44:31.203606] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:23:56.303 [2024-11-22 08:44:31.203747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78872 ] 00:23:56.303 [2024-11-22 08:44:31.381227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.562 [2024-11-22 08:44:31.491446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.500 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.500 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:23:57.500 08:44:32 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:57.500 08:44:32 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:57.500 08:44:32 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:57.500 08:44:32 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:57.500 08:44:32 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:57.500 08:44:32 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:57.760 08:44:32 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:57.760 08:44:32 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:57.760 08:44:32 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:57.760 { 00:23:57.760 "name": "nvme0n1", 00:23:57.760 "aliases": [ 00:23:57.760 "8aeff3e4-8eab-4e07-8cb3-ede7c7106518" 00:23:57.760 ], 00:23:57.760 "product_name": "NVMe disk", 00:23:57.760 "block_size": 4096, 00:23:57.760 "num_blocks": 1310720, 00:23:57.760 "uuid": 
"8aeff3e4-8eab-4e07-8cb3-ede7c7106518", 00:23:57.760 "numa_id": -1, 00:23:57.760 "assigned_rate_limits": { 00:23:57.760 "rw_ios_per_sec": 0, 00:23:57.760 "rw_mbytes_per_sec": 0, 00:23:57.760 "r_mbytes_per_sec": 0, 00:23:57.760 "w_mbytes_per_sec": 0 00:23:57.760 }, 00:23:57.760 "claimed": true, 00:23:57.760 "claim_type": "read_many_write_one", 00:23:57.760 "zoned": false, 00:23:57.760 "supported_io_types": { 00:23:57.760 "read": true, 00:23:57.760 "write": true, 00:23:57.760 "unmap": true, 00:23:57.760 "flush": true, 00:23:57.760 "reset": true, 00:23:57.760 "nvme_admin": true, 00:23:57.760 "nvme_io": true, 00:23:57.760 "nvme_io_md": false, 00:23:57.760 "write_zeroes": true, 00:23:57.760 "zcopy": false, 00:23:57.760 "get_zone_info": false, 00:23:57.760 "zone_management": false, 00:23:57.760 "zone_append": false, 00:23:57.760 "compare": true, 00:23:57.760 "compare_and_write": false, 00:23:57.760 "abort": true, 00:23:57.760 "seek_hole": false, 00:23:57.760 "seek_data": false, 00:23:57.760 "copy": true, 00:23:57.760 "nvme_iov_md": false 00:23:57.760 }, 00:23:57.760 "driver_specific": { 00:23:57.760 "nvme": [ 00:23:57.760 { 00:23:57.760 "pci_address": "0000:00:11.0", 00:23:57.760 "trid": { 00:23:57.760 "trtype": "PCIe", 00:23:57.760 "traddr": "0000:00:11.0" 00:23:57.760 }, 00:23:57.760 "ctrlr_data": { 00:23:57.760 "cntlid": 0, 00:23:57.760 "vendor_id": "0x1b36", 00:23:57.760 "model_number": "QEMU NVMe Ctrl", 00:23:57.760 "serial_number": "12341", 00:23:57.760 "firmware_revision": "8.0.0", 00:23:57.760 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:57.760 "oacs": { 00:23:57.760 "security": 0, 00:23:57.760 "format": 1, 00:23:57.760 "firmware": 0, 00:23:57.760 "ns_manage": 1 00:23:57.760 }, 00:23:57.760 "multi_ctrlr": false, 00:23:57.760 "ana_reporting": false 00:23:57.760 }, 00:23:57.760 "vs": { 00:23:57.760 "nvme_version": "1.4" 00:23:57.760 }, 00:23:57.760 "ns_data": { 00:23:57.760 "id": 1, 00:23:57.760 "can_share": false 00:23:57.760 } 00:23:57.760 } 00:23:57.760 ], 00:23:57.760 "mp_policy": "active_passive" 00:23:57.760 } 00:23:57.760 } 00:23:57.760 ]' 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:57.760 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:58.019 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:58.019 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:58.019 08:44:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:23:58.019 08:44:32 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:58.019 08:44:32 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:58.019 08:44:32 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:58.019 08:44:32 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:58.019 08:44:32 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:58.019 08:44:33 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=70ac0eee-d93c-44d5-bfac-d79903346d07 00:23:58.019 08:44:33 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:58.019 08:44:33 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70ac0eee-d93c-44d5-bfac-d79903346d07 00:23:58.279 08:44:33 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:58.539 08:44:33 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=8dfe7d70-c2d4-434c-9c21-a1b75110141f 00:23:58.539 08:44:33 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8dfe7d70-c2d4-434c-9c21-a1b75110141f 00:23:58.799 08:44:33 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:58.799 08:44:33 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:58.799 08:44:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:58.799 08:44:33 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:58.799 08:44:33 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:58.799 08:44:33 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:58.799 08:44:33 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:58.799 08:44:33 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:58.799 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:58.799 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:58.799 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:58.799 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:58.799 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:59.058 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:59.058 { 00:23:59.058 "name": "d3053ea7-88c1-44c4-8d62-e21b82c03076", 00:23:59.058 "aliases": [ 00:23:59.058 "lvs/nvme0n1p0" 00:23:59.058 ], 00:23:59.058 "product_name": "Logical Volume", 00:23:59.058 "block_size": 4096, 00:23:59.058 "num_blocks": 26476544, 00:23:59.058 "uuid": "d3053ea7-88c1-44c4-8d62-e21b82c03076", 00:23:59.058 "assigned_rate_limits": { 00:23:59.058 "rw_ios_per_sec": 0, 00:23:59.058 "rw_mbytes_per_sec": 0, 00:23:59.058 "r_mbytes_per_sec": 0, 00:23:59.058 "w_mbytes_per_sec": 0 00:23:59.058 }, 00:23:59.058 "claimed": false, 00:23:59.058 "zoned": false, 00:23:59.058 "supported_io_types": { 00:23:59.058 "read": true, 00:23:59.058 "write": true, 00:23:59.058 "unmap": true, 00:23:59.058 "flush": false, 00:23:59.058 "reset": true, 00:23:59.058 "nvme_admin": false, 00:23:59.058 "nvme_io": false, 00:23:59.058 "nvme_io_md": false, 00:23:59.058 "write_zeroes": true, 00:23:59.058 "zcopy": false, 00:23:59.058 "get_zone_info": false, 00:23:59.058 "zone_management": false, 00:23:59.058 "zone_append": false, 00:23:59.058 "compare": false, 00:23:59.058 "compare_and_write": false, 00:23:59.058 "abort": false, 00:23:59.058 "seek_hole": true, 00:23:59.058 "seek_data": true, 00:23:59.058 "copy": false, 00:23:59.058 "nvme_iov_md": false 00:23:59.058 }, 00:23:59.058 "driver_specific": { 00:23:59.058 "lvol": { 00:23:59.058 "lvol_store_uuid": "8dfe7d70-c2d4-434c-9c21-a1b75110141f", 00:23:59.058 "base_bdev": "nvme0n1", 00:23:59.058 "thin_provision": true, 00:23:59.058 "num_allocated_clusters": 0, 00:23:59.058 "snapshot": false, 00:23:59.058 "clone": false, 00:23:59.058 "esnap_clone": false 00:23:59.058 } 00:23:59.058 } 00:23:59.058 } 00:23:59.058 ]' 00:23:59.058 08:44:33 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:59.058 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:59.058 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:59.058 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:59.058 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:59.058 08:44:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:59.058 08:44:33 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:59.058 08:44:33 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:59.058 08:44:33 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:59.317 08:44:34 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:59.317 08:44:34 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:59.317 08:44:34 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:59.317 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:59.317 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:59.317 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:59.317 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:59.317 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:59.577 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:59.577 { 00:23:59.577 "name": "d3053ea7-88c1-44c4-8d62-e21b82c03076", 00:23:59.577 "aliases": [ 00:23:59.577 "lvs/nvme0n1p0" 00:23:59.577 ], 00:23:59.577 "product_name": "Logical Volume", 00:23:59.577 "block_size": 4096, 00:23:59.577 "num_blocks": 26476544, 00:23:59.577 "uuid": "d3053ea7-88c1-44c4-8d62-e21b82c03076", 00:23:59.577 "assigned_rate_limits": { 00:23:59.577 "rw_ios_per_sec": 0, 00:23:59.577 "rw_mbytes_per_sec": 0, 00:23:59.577 "r_mbytes_per_sec": 0, 00:23:59.577 "w_mbytes_per_sec": 0 00:23:59.577 }, 00:23:59.577 "claimed": false, 00:23:59.577 "zoned": false, 00:23:59.577 "supported_io_types": { 00:23:59.577 "read": true, 00:23:59.577 "write": true, 00:23:59.577 "unmap": true, 00:23:59.577 "flush": false, 00:23:59.577 "reset": true, 00:23:59.577 "nvme_admin": false, 00:23:59.577 "nvme_io": false, 00:23:59.577 "nvme_io_md": false, 00:23:59.577 "write_zeroes": true, 00:23:59.577 "zcopy": false, 00:23:59.577 "get_zone_info": false, 00:23:59.577 "zone_management": false, 00:23:59.577 "zone_append": false, 00:23:59.577 "compare": false, 00:23:59.577 "compare_and_write": false, 00:23:59.577 "abort": false, 00:23:59.577 "seek_hole": true, 00:23:59.577 "seek_data": true, 00:23:59.577 "copy": false, 00:23:59.577 "nvme_iov_md": false 00:23:59.577 }, 00:23:59.577 "driver_specific": { 00:23:59.577 "lvol": { 00:23:59.577 "lvol_store_uuid": "8dfe7d70-c2d4-434c-9c21-a1b75110141f", 00:23:59.577 "base_bdev": "nvme0n1", 00:23:59.577 "thin_provision": true, 00:23:59.577 "num_allocated_clusters": 0, 00:23:59.577 "snapshot": false, 00:23:59.577 "clone": false, 00:23:59.577 "esnap_clone": false 00:23:59.577 } 00:23:59.577 } 00:23:59.577 } 00:23:59.577 ]' 00:23:59.577 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
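The block_size/num_blocks pairs pulled out with jq here feed get_bdev_size, which reports MiB as block_size * num_blocks / 1024^2: 4096 * 1310720 / 1048576 = 5120 MiB for nvme0n1 above, and 4096 * 26476544 / 1048576 = 103424 MiB for the lvol queried next. A one-function sketch of that arithmetic (bdev_size_mb is our name, not the harness's):

# Sketch: the MiB figure get_bdev_size echoes, from the two jq'd JSON fields.
bdev_size_mb() {
    local bs=$1 nb=$2
    echo $(( bs * nb / 1024 / 1024 ))
}
bdev_size_mb 4096 1310720     # -> 5120   (nvme0n1)
bdev_size_mb 4096 26476544    # -> 103424 (lvs/nvme0n1p0)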
00:23:59.577 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:59.577 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:59.577 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:59.577 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:59.577 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:59.577 08:44:34 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:59.577 08:44:34 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:59.836 08:44:34 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:59.836 08:44:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:59.836 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:59.836 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:59.836 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:59.836 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:59.836 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3053ea7-88c1-44c4-8d62-e21b82c03076 00:23:59.836 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:59.836 { 00:23:59.836 "name": "d3053ea7-88c1-44c4-8d62-e21b82c03076", 00:23:59.836 "aliases": [ 00:23:59.836 "lvs/nvme0n1p0" 00:23:59.836 ], 00:23:59.836 "product_name": "Logical Volume", 00:23:59.836 "block_size": 4096, 00:23:59.836 "num_blocks": 26476544, 00:23:59.836 "uuid": "d3053ea7-88c1-44c4-8d62-e21b82c03076", 00:23:59.836 "assigned_rate_limits": { 00:23:59.836 "rw_ios_per_sec": 0, 00:23:59.836 "rw_mbytes_per_sec": 0, 00:23:59.836 "r_mbytes_per_sec": 0, 00:23:59.836 "w_mbytes_per_sec": 0 00:23:59.836 }, 00:23:59.836 "claimed": false, 00:23:59.836 "zoned": false, 00:23:59.836 "supported_io_types": { 00:23:59.836 "read": true, 00:23:59.836 "write": true, 00:23:59.836 "unmap": true, 00:23:59.836 "flush": false, 00:23:59.836 "reset": true, 00:23:59.836 "nvme_admin": false, 00:23:59.836 "nvme_io": false, 00:23:59.836 "nvme_io_md": false, 00:23:59.836 "write_zeroes": true, 00:23:59.836 "zcopy": false, 00:23:59.836 "get_zone_info": false, 00:23:59.836 "zone_management": false, 00:23:59.836 "zone_append": false, 00:23:59.836 "compare": false, 00:23:59.836 "compare_and_write": false, 00:23:59.836 "abort": false, 00:23:59.836 "seek_hole": true, 00:23:59.836 "seek_data": true, 00:23:59.836 "copy": false, 00:23:59.836 "nvme_iov_md": false 00:23:59.836 }, 00:23:59.836 "driver_specific": { 00:23:59.836 "lvol": { 00:23:59.836 "lvol_store_uuid": "8dfe7d70-c2d4-434c-9c21-a1b75110141f", 00:23:59.836 "base_bdev": "nvme0n1", 00:23:59.836 "thin_provision": true, 00:23:59.836 "num_allocated_clusters": 0, 00:23:59.836 "snapshot": false, 00:23:59.836 "clone": false, 00:23:59.836 "esnap_clone": false 00:23:59.836 } 00:23:59.836 } 00:23:59.836 } 00:23:59.836 ]' 00:23:59.836 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:00.097 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:00.097 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:00.097 08:44:34 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:24:00.097 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:00.097 08:44:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:00.097 08:44:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:00.097 08:44:34 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d3053ea7-88c1-44c4-8d62-e21b82c03076 --l2p_dram_limit 10' 00:24:00.097 08:44:34 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:00.097 08:44:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:00.097 08:44:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:00.097 08:44:34 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:24:00.097 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:24:00.097 08:44:34 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d3053ea7-88c1-44c4-8d62-e21b82c03076 --l2p_dram_limit 10 -c nvc0n1p0 00:24:00.097 [2024-11-22 08:44:35.158917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.158980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:00.097 [2024-11-22 08:44:35.159001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:00.097 [2024-11-22 08:44:35.159028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.159096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.159109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:00.097 [2024-11-22 08:44:35.159122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:00.097 [2024-11-22 08:44:35.159132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.159163] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:00.097 [2024-11-22 08:44:35.160168] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:00.097 [2024-11-22 08:44:35.160209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.160221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:00.097 [2024-11-22 08:44:35.160234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:24:00.097 [2024-11-22 08:44:35.160245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.160327] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4cf9305f-5939-4b7e-b9bf-6af33c4f18fe 00:24:00.097 [2024-11-22 08:44:35.161732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.161768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:00.097 [2024-11-22 08:44:35.161781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:00.097 [2024-11-22 08:44:35.161794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.169344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 
08:44:35.169375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:00.097 [2024-11-22 08:44:35.169390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.520 ms 00:24:00.097 [2024-11-22 08:44:35.169419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.169515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.169531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:00.097 [2024-11-22 08:44:35.169542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:00.097 [2024-11-22 08:44:35.169559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.169622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.169637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:00.097 [2024-11-22 08:44:35.169648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:00.097 [2024-11-22 08:44:35.169663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.169690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:00.097 [2024-11-22 08:44:35.174598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.174657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:00.097 [2024-11-22 08:44:35.174674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.920 ms 00:24:00.097 [2024-11-22 08:44:35.174684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.174738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.174749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:00.097 [2024-11-22 08:44:35.174763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:00.097 [2024-11-22 08:44:35.174773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.174810] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:00.097 [2024-11-22 08:44:35.174933] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:00.097 [2024-11-22 08:44:35.174954] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:00.097 [2024-11-22 08:44:35.174967] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:00.097 [2024-11-22 08:44:35.174994] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:00.097 [2024-11-22 08:44:35.175006] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:00.097 [2024-11-22 08:44:35.175020] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:00.097 [2024-11-22 08:44:35.175030] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:00.097 [2024-11-22 08:44:35.175046] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:00.097 [2024-11-22 08:44:35.175056] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:00.097 [2024-11-22 08:44:35.175068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.175078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:00.097 [2024-11-22 08:44:35.175090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:24:00.097 [2024-11-22 08:44:35.175110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.175188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.097 [2024-11-22 08:44:35.175199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:00.097 [2024-11-22 08:44:35.175212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:00.097 [2024-11-22 08:44:35.175222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.097 [2024-11-22 08:44:35.175318] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:00.097 [2024-11-22 08:44:35.175331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:00.097 [2024-11-22 08:44:35.175344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:00.097 [2024-11-22 08:44:35.175354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.097 [2024-11-22 08:44:35.175367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:00.097 [2024-11-22 08:44:35.175376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:00.097 [2024-11-22 08:44:35.175388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:00.097 [2024-11-22 08:44:35.175397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:00.097 [2024-11-22 08:44:35.175408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:00.097 [2024-11-22 08:44:35.175417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:00.097 [2024-11-22 08:44:35.175429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:00.097 [2024-11-22 08:44:35.175439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:00.097 [2024-11-22 08:44:35.175450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:00.097 [2024-11-22 08:44:35.175459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:00.097 [2024-11-22 08:44:35.175471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:00.097 [2024-11-22 08:44:35.175480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.097 [2024-11-22 08:44:35.175495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:00.097 [2024-11-22 08:44:35.175505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:00.097 [2024-11-22 08:44:35.175518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.097 [2024-11-22 08:44:35.175527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:00.097 [2024-11-22 08:44:35.175539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:00.097 [2024-11-22 08:44:35.175548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.097 [2024-11-22 08:44:35.175560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:00.097 
[2024-11-22 08:44:35.175569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:00.097 [2024-11-22 08:44:35.175581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.098 [2024-11-22 08:44:35.175590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:00.098 [2024-11-22 08:44:35.175601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:00.098 [2024-11-22 08:44:35.175610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.098 [2024-11-22 08:44:35.175622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:00.098 [2024-11-22 08:44:35.175630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:00.098 [2024-11-22 08:44:35.175642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.098 [2024-11-22 08:44:35.175651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:00.098 [2024-11-22 08:44:35.175665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:00.098 [2024-11-22 08:44:35.175674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:00.098 [2024-11-22 08:44:35.175686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:00.098 [2024-11-22 08:44:35.175695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:00.098 [2024-11-22 08:44:35.175706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:00.098 [2024-11-22 08:44:35.175715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:00.098 [2024-11-22 08:44:35.175727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:00.098 [2024-11-22 08:44:35.175736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.098 [2024-11-22 08:44:35.175747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:00.098 [2024-11-22 08:44:35.175756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:00.098 [2024-11-22 08:44:35.175767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.098 [2024-11-22 08:44:35.175776] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:00.098 [2024-11-22 08:44:35.175789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:00.098 [2024-11-22 08:44:35.175798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:00.098 [2024-11-22 08:44:35.175811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.098 [2024-11-22 08:44:35.175821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:00.098 [2024-11-22 08:44:35.175837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:00.098 [2024-11-22 08:44:35.175846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:00.098 [2024-11-22 08:44:35.175858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:00.098 [2024-11-22 08:44:35.175868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:00.098 [2024-11-22 08:44:35.175879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:00.098 [2024-11-22 08:44:35.175893] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:00.098 [2024-11-22 
08:44:35.175907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:00.098 [2024-11-22 08:44:35.175921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:00.098 [2024-11-22 08:44:35.175934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:00.098 [2024-11-22 08:44:35.175944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:00.098 [2024-11-22 08:44:35.175966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:00.098 [2024-11-22 08:44:35.175977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:00.098 [2024-11-22 08:44:35.175989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:00.098 [2024-11-22 08:44:35.175999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:00.098 [2024-11-22 08:44:35.176012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:00.098 [2024-11-22 08:44:35.176023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:00.098 [2024-11-22 08:44:35.176039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:00.098 [2024-11-22 08:44:35.176049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:00.098 [2024-11-22 08:44:35.176061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:00.098 [2024-11-22 08:44:35.176072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:00.098 [2024-11-22 08:44:35.176086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:00.098 [2024-11-22 08:44:35.176096] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:00.098 [2024-11-22 08:44:35.176109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:00.098 [2024-11-22 08:44:35.176120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:00.098 [2024-11-22 08:44:35.176133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:00.098 [2024-11-22 08:44:35.176143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:00.098 [2024-11-22 08:44:35.176156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:00.098 [2024-11-22 08:44:35.176167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.098 [2024-11-22 08:44:35.176179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:00.098 [2024-11-22 08:44:35.176189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:24:00.098 [2024-11-22 08:44:35.176201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.098 [2024-11-22 08:44:35.176242] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:00.098 [2024-11-22 08:44:35.176260] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:04.358 [2024-11-22 08:44:38.764794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.764857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:04.358 [2024-11-22 08:44:38.764874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3594.377 ms 00:24:04.358 [2024-11-22 08:44:38.764903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.801534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.801588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.358 [2024-11-22 08:44:38.801603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.415 ms 00:24:04.358 [2024-11-22 08:44:38.801632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.801751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.801767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:04.358 [2024-11-22 08:44:38.801778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:04.358 [2024-11-22 08:44:38.801793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.842568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.842621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.358 [2024-11-22 08:44:38.842634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.782 ms 00:24:04.358 [2024-11-22 08:44:38.842647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.842699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.842717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.358 [2024-11-22 08:44:38.842728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:04.358 [2024-11-22 08:44:38.842740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.843243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.843272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.358 [2024-11-22 08:44:38.843284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:24:04.358 [2024-11-22 08:44:38.843297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 
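The superblock metadata dump above describes each region as blk_offs/blk_sz counted in 4 KiB FTL blocks, while the ftl_layout dump before it reports the same regions in MiB, and the two agree: region type:0x2 at blk_offs:0x20 blk_sz:0x5000 is 20480 blocks * 4 KiB = 80.00 MiB starting at 0.12 MiB, i.e. the "Region l2p" entry, which is also exactly the space the L2P table needs (20971520 entries * 4-byte addresses = 80 MiB, matching the "L2P entries" and "L2P address size" notices printed at layout setup). A minimal cross-check sketch, assuming the 4096-byte block size implied by the dump (illustrative shell, not part of the test):

#!/usr/bin/env bash
# Convert an FTL superblock region descriptor (counted in 4 KiB blocks)
# into the MiB figures printed by dump_region above.
block_size=4096   # assumed FTL block size, implied by the 4 KiB granularity

to_mib() {        # integer math, truncated to two decimals like the dump
  local blocks=$1
  printf '%d.%02d' $(( blocks * block_size / 1048576 )) \
                   $(( blocks * block_size % 1048576 * 100 / 1048576 ))
}

# Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 (the l2p region)
echo "l2p offset: $(to_mib $((0x20))) MiB, size: $(to_mib $((0x5000))) MiB"
# prints: l2p offset: 0.12 MiB, size: 80.00 MiB

Within this first startup the 3594.377 ms "Scrub NV cache" step above is by far the largest cost: a freshly created device scrubs all 5 NV cache chunks before first use.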
[2024-11-22 08:44:38.843394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.843408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.358 [2024-11-22 08:44:38.843421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:24:04.358 [2024-11-22 08:44:38.843437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.863808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.863856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.358 [2024-11-22 08:44:38.863869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.383 ms 00:24:04.358 [2024-11-22 08:44:38.863880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.875928] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:04.358 [2024-11-22 08:44:38.879229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.879257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:04.358 [2024-11-22 08:44:38.879272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.266 ms 00:24:04.358 [2024-11-22 08:44:38.879298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.986295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.986345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:04.358 [2024-11-22 08:44:38.986380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.133 ms 00:24:04.358 [2024-11-22 08:44:38.986390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:38.986572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:38.986589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:04.358 [2024-11-22 08:44:38.986607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:24:04.358 [2024-11-22 08:44:38.986627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.021999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.022038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:04.358 [2024-11-22 08:44:39.022054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.373 ms 00:24:04.358 [2024-11-22 08:44:39.022064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.056497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.056546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:04.358 [2024-11-22 08:44:39.056563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.424 ms 00:24:04.358 [2024-11-22 08:44:39.056573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.057325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.057354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:04.358 
[2024-11-22 08:44:39.057369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:24:04.358 [2024-11-22 08:44:39.057379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.154354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.154396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:04.358 [2024-11-22 08:44:39.154415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.068 ms 00:24:04.358 [2024-11-22 08:44:39.154441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.189236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.189278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:04.358 [2024-11-22 08:44:39.189293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.766 ms 00:24:04.358 [2024-11-22 08:44:39.189303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.223696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.223732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:04.358 [2024-11-22 08:44:39.223746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.389 ms 00:24:04.358 [2024-11-22 08:44:39.223755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.257937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.257981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:04.358 [2024-11-22 08:44:39.257996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.179 ms 00:24:04.358 [2024-11-22 08:44:39.258006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.258068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.258079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:04.358 [2024-11-22 08:44:39.258095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:04.358 [2024-11-22 08:44:39.258106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.258203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.358 [2024-11-22 08:44:39.258215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:04.358 [2024-11-22 08:44:39.258231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:04.358 [2024-11-22 08:44:39.258241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.358 [2024-11-22 08:44:39.259275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4106.558 ms, result 0 00:24:04.358 { 00:24:04.358 "name": "ftl0", 00:24:04.358 "uuid": "4cf9305f-5939-4b7e-b9bf-6af33c4f18fe" 00:24:04.358 } 00:24:04.358 08:44:39 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:04.358 08:44:39 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:04.625 08:44:39 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:04.625 08:44:39 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:04.625 [2024-11-22 08:44:39.638207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.625 [2024-11-22 08:44:39.638261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:04.625 [2024-11-22 08:44:39.638277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:04.625 [2024-11-22 08:44:39.638299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.625 [2024-11-22 08:44:39.638325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:04.625 [2024-11-22 08:44:39.642467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.625 [2024-11-22 08:44:39.642500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:04.625 [2024-11-22 08:44:39.642515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.126 ms 00:24:04.626 [2024-11-22 08:44:39.642526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.626 [2024-11-22 08:44:39.642781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.626 [2024-11-22 08:44:39.642795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:04.626 [2024-11-22 08:44:39.642812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:24:04.626 [2024-11-22 08:44:39.642823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.626 [2024-11-22 08:44:39.645343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.626 [2024-11-22 08:44:39.645363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:04.626 [2024-11-22 08:44:39.645378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 00:24:04.626 [2024-11-22 08:44:39.645388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.626 [2024-11-22 08:44:39.650326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.626 [2024-11-22 08:44:39.650362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:04.626 [2024-11-22 08:44:39.650380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:24:04.626 [2024-11-22 08:44:39.650390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.626 [2024-11-22 08:44:39.687358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.626 [2024-11-22 08:44:39.687395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:04.626 [2024-11-22 08:44:39.687427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.955 ms 00:24:04.626 [2024-11-22 08:44:39.687438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.897 [2024-11-22 08:44:39.708937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.897 [2024-11-22 08:44:39.708978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:04.897 [2024-11-22 08:44:39.709011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.483 ms 00:24:04.897 [2024-11-22 08:44:39.709021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.897 [2024-11-22 08:44:39.709170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.897 [2024-11-22 08:44:39.709187] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:04.897 [2024-11-22 08:44:39.709201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:04.897 [2024-11-22 08:44:39.709211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.897 [2024-11-22 08:44:39.745022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.897 [2024-11-22 08:44:39.745056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:04.897 [2024-11-22 08:44:39.745088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.847 ms 00:24:04.897 [2024-11-22 08:44:39.745098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.897 [2024-11-22 08:44:39.780405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.897 [2024-11-22 08:44:39.780438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:04.897 [2024-11-22 08:44:39.780469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.319 ms 00:24:04.897 [2024-11-22 08:44:39.780479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.897 [2024-11-22 08:44:39.815306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.897 [2024-11-22 08:44:39.815347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:04.897 [2024-11-22 08:44:39.815362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.835 ms 00:24:04.897 [2024-11-22 08:44:39.815371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.897 [2024-11-22 08:44:39.849109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.897 [2024-11-22 08:44:39.849148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:04.897 [2024-11-22 08:44:39.849163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.665 ms 00:24:04.897 [2024-11-22 08:44:39.849172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.897 [2024-11-22 08:44:39.849232] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:04.897 [2024-11-22 08:44:39.849248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:04.897 [2024-11-22 08:44:39.849273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:04.897 [2024-11-22 08:44:39.849284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:04.897 [2024-11-22 08:44:39.849298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:04.897 [2024-11-22 08:44:39.849308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849369] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 
[2024-11-22 08:44:39.849676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.849978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:04.898 [2024-11-22 08:44:39.850009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:04.898 [2024-11-22 08:44:39.850426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:04.899 [2024-11-22 08:44:39.850439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:04.899 [2024-11-22 08:44:39.850450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:04.899 [2024-11-22 08:44:39.850462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:04.899 [2024-11-22 08:44:39.850473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:04.899 [2024-11-22 08:44:39.850487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:04.899 [2024-11-22 08:44:39.850504] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:04.899 [2024-11-22 08:44:39.850520] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf9305f-5939-4b7e-b9bf-6af33c4f18fe 00:24:04.899 [2024-11-22 08:44:39.850531] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:04.899 [2024-11-22 08:44:39.850546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:04.899 [2024-11-22 08:44:39.850556] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:04.899 [2024-11-22 08:44:39.850572] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:04.899 [2024-11-22 08:44:39.850582] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:04.899 [2024-11-22 08:44:39.850595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:04.899 [2024-11-22 08:44:39.850604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:04.899 [2024-11-22 08:44:39.850625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:04.899 [2024-11-22 08:44:39.850634] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:04.899 [2024-11-22 08:44:39.850646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.899 [2024-11-22 08:44:39.850658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:04.899 [2024-11-22 08:44:39.850671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms 00:24:04.899 [2024-11-22 08:44:39.850681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.899 [2024-11-22 08:44:39.869355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.899 [2024-11-22 08:44:39.869386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:04.899 [2024-11-22 08:44:39.869401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.643 ms 00:24:04.899 [2024-11-22 08:44:39.869410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.899 [2024-11-22 08:44:39.869940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.899 [2024-11-22 08:44:39.869975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:04.899 [2024-11-22 08:44:39.869989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:24:04.899 [2024-11-22 08:44:39.870002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.899 [2024-11-22 08:44:39.931459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.899 [2024-11-22 08:44:39.931496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.899 [2024-11-22 08:44:39.931527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.899 [2024-11-22 08:44:39.931537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.899 [2024-11-22 08:44:39.931594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.899 [2024-11-22 08:44:39.931604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.899 [2024-11-22 08:44:39.931617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.899 [2024-11-22 08:44:39.931630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.899 [2024-11-22 08:44:39.931721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.899 [2024-11-22 08:44:39.931734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.899 [2024-11-22 08:44:39.931747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.899 [2024-11-22 08:44:39.931757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.899 [2024-11-22 08:44:39.931781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.899 [2024-11-22 08:44:39.931791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.899 [2024-11-22 08:44:39.931803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.899 [2024-11-22 08:44:39.931813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.052580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.159 [2024-11-22 08:44:40.052629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.159 [2024-11-22 08:44:40.052645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:05.159 [2024-11-22 08:44:40.052671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.147308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.159 [2024-11-22 08:44:40.147356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.159 [2024-11-22 08:44:40.147388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.159 [2024-11-22 08:44:40.147402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.147511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.159 [2024-11-22 08:44:40.147523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.159 [2024-11-22 08:44:40.147536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.159 [2024-11-22 08:44:40.147546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.147601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.159 [2024-11-22 08:44:40.147613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.159 [2024-11-22 08:44:40.147625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.159 [2024-11-22 08:44:40.147635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.147782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.159 [2024-11-22 08:44:40.147796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.159 [2024-11-22 08:44:40.147809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.159 [2024-11-22 08:44:40.147819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.147864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.159 [2024-11-22 08:44:40.147876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:05.159 [2024-11-22 08:44:40.147889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.159 [2024-11-22 08:44:40.147899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.147941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.159 [2024-11-22 08:44:40.147954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.159 [2024-11-22 08:44:40.147967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.159 [2024-11-22 08:44:40.147977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.148048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.159 [2024-11-22 08:44:40.148061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.159 [2024-11-22 08:44:40.148074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.159 [2024-11-22 08:44:40.148084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.159 [2024-11-22 08:44:40.148213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 510.796 ms, result 0 00:24:05.159 true 00:24:05.159 08:44:40 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78872 
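The statistics dumped during the "Dump statistics" step above show total writes: 960 against user writes: 0, meaning only FTL metadata has touched the media in this create/teardown cycle, so the write amplification factor (WAF = total media writes / user writes) is printed as inf. The bare "true" after the 'FTL shutdown' finish message is evidently the JSON result of the bdev_ftl_unload RPC, after which killprocess tears down the SPDK app holding pid 78872. A minimal illustration of the WAF arithmetic (plain shell, not SPDK code), handling the zero-user-writes case the way the dump does:

# WAF = total media writes / user writes; with no user writes yet the
# ratio is undefined and the dump above prints "inf".
total_writes=960
user_writes=0
if (( user_writes == 0 )); then
  echo "WAF: inf"
else
  printf 'WAF: %d.%02d\n' $(( total_writes / user_writes )) \
         $(( total_writes * 100 / user_writes % 100 ))
fi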
00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78872 ']' 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78872 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78872 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.159 killing process with pid 78872 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78872' 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78872 00:24:05.159 08:44:40 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78872 00:24:09.345 08:44:44 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:13.539 262144+0 records in 00:24:13.539 262144+0 records out 00:24:13.539 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.91008 s, 275 MB/s 00:24:13.539 08:44:48 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:14.916 08:44:49 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:14.916 [2024-11-22 08:44:49.900669] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:24:14.916 [2024-11-22 08:44:49.900799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79108 ] 00:24:15.176 [2024-11-22 08:44:50.082723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.176 [2024-11-22 08:44:50.197120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.746 [2024-11-22 08:44:50.543596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:15.746 [2024-11-22 08:44:50.543678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:15.746 [2024-11-22 08:44:50.707126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.707178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:15.746 [2024-11-22 08:44:50.707214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:15.746 [2024-11-22 08:44:50.707225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.707270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.707282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:15.746 [2024-11-22 08:44:50.707295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:15.746 [2024-11-22 08:44:50.707305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.707325] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:24:15.746 [2024-11-22 08:44:50.708303] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:15.746 [2024-11-22 08:44:50.708332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.708343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:15.746 [2024-11-22 08:44:50.708354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:24:15.746 [2024-11-22 08:44:50.708364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.709781] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:15.746 [2024-11-22 08:44:50.728283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.728324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:15.746 [2024-11-22 08:44:50.728337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.533 ms 00:24:15.746 [2024-11-22 08:44:50.728347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.728425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.728438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:15.746 [2024-11-22 08:44:50.728449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:15.746 [2024-11-22 08:44:50.728460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.735210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.735242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:15.746 [2024-11-22 08:44:50.735268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.692 ms 00:24:15.746 [2024-11-22 08:44:50.735278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.735356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.735368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:15.746 [2024-11-22 08:44:50.735380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:15.746 [2024-11-22 08:44:50.735389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.735427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.735439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:15.746 [2024-11-22 08:44:50.735449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:15.746 [2024-11-22 08:44:50.735458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.735482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:15.746 [2024-11-22 08:44:50.740217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.740252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:15.746 [2024-11-22 08:44:50.740279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:24:15.746 [2024-11-22 08:44:50.740292] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.740320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.740331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:15.746 [2024-11-22 08:44:50.740341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:15.746 [2024-11-22 08:44:50.740350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.740399] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:15.746 [2024-11-22 08:44:50.740421] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:15.746 [2024-11-22 08:44:50.740454] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:15.746 [2024-11-22 08:44:50.740474] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:15.746 [2024-11-22 08:44:50.740559] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:15.746 [2024-11-22 08:44:50.740572] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:15.746 [2024-11-22 08:44:50.740600] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:15.746 [2024-11-22 08:44:50.740613] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:15.746 [2024-11-22 08:44:50.740624] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:15.746 [2024-11-22 08:44:50.740635] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:15.746 [2024-11-22 08:44:50.740645] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:15.746 [2024-11-22 08:44:50.740654] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:15.746 [2024-11-22 08:44:50.740664] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:15.746 [2024-11-22 08:44:50.740677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.740687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:15.746 [2024-11-22 08:44:50.740698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:24:15.746 [2024-11-22 08:44:50.740707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.740777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.746 [2024-11-22 08:44:50.740788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:15.746 [2024-11-22 08:44:50.740798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:15.746 [2024-11-22 08:44:50.740807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.746 [2024-11-22 08:44:50.740899] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:15.746 [2024-11-22 08:44:50.740916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:15.746 [2024-11-22 08:44:50.740927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:24:15.746 [2024-11-22 08:44:50.740937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.746 [2024-11-22 08:44:50.740948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:15.746 [2024-11-22 08:44:50.740957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:15.746 [2024-11-22 08:44:50.740966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:15.746 [2024-11-22 08:44:50.740976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:15.746 [2024-11-22 08:44:50.741001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:15.746 [2024-11-22 08:44:50.741011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:15.746 [2024-11-22 08:44:50.741020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:15.746 [2024-11-22 08:44:50.741030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:15.746 [2024-11-22 08:44:50.741038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:15.746 [2024-11-22 08:44:50.741048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:15.747 [2024-11-22 08:44:50.741057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:15.747 [2024-11-22 08:44:50.741075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:15.747 [2024-11-22 08:44:50.741093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:15.747 [2024-11-22 08:44:50.741102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:15.747 [2024-11-22 08:44:50.741121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.747 [2024-11-22 08:44:50.741139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:15.747 [2024-11-22 08:44:50.741148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.747 [2024-11-22 08:44:50.741166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:15.747 [2024-11-22 08:44:50.741175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.747 [2024-11-22 08:44:50.741193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:15.747 [2024-11-22 08:44:50.741202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.747 [2024-11-22 08:44:50.741220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:15.747 [2024-11-22 08:44:50.741229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:15.747 [2024-11-22 08:44:50.741246] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:24:15.747 [2024-11-22 08:44:50.741255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:15.747 [2024-11-22 08:44:50.741264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:15.747 [2024-11-22 08:44:50.741273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:15.747 [2024-11-22 08:44:50.741282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:15.747 [2024-11-22 08:44:50.741291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:15.747 [2024-11-22 08:44:50.741309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:15.747 [2024-11-22 08:44:50.741319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741328] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:15.747 [2024-11-22 08:44:50.741337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:15.747 [2024-11-22 08:44:50.741347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:15.747 [2024-11-22 08:44:50.741357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.747 [2024-11-22 08:44:50.741367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:15.747 [2024-11-22 08:44:50.741376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:15.747 [2024-11-22 08:44:50.741385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:15.747 [2024-11-22 08:44:50.741395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:15.747 [2024-11-22 08:44:50.741403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:15.747 [2024-11-22 08:44:50.741413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:15.747 [2024-11-22 08:44:50.741424] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:15.747 [2024-11-22 08:44:50.741435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:15.747 [2024-11-22 08:44:50.741447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:15.747 [2024-11-22 08:44:50.741457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:15.747 [2024-11-22 08:44:50.741467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:15.747 [2024-11-22 08:44:50.741477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:15.747 [2024-11-22 08:44:50.741487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:15.747 [2024-11-22 08:44:50.741497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:15.747 [2024-11-22 08:44:50.741507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:15.747 [2024-11-22 08:44:50.741518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:15.747 [2024-11-22 08:44:50.741527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:15.747 [2024-11-22 08:44:50.741537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:15.747 [2024-11-22 08:44:50.741547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:15.747 [2024-11-22 08:44:50.741556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:15.747 [2024-11-22 08:44:50.741566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:15.747 [2024-11-22 08:44:50.741576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:15.747 [2024-11-22 08:44:50.741586] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:15.747 [2024-11-22 08:44:50.741601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:15.747 [2024-11-22 08:44:50.741611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:15.747 [2024-11-22 08:44:50.741622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:15.747 [2024-11-22 08:44:50.741632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:15.747 [2024-11-22 08:44:50.741642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:15.747 [2024-11-22 08:44:50.741653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.747 [2024-11-22 08:44:50.741663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:15.747 [2024-11-22 08:44:50.741673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:24:15.747 [2024-11-22 08:44:50.741683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.747 [2024-11-22 08:44:50.780874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.747 [2024-11-22 08:44:50.780910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:15.747 [2024-11-22 08:44:50.780924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.211 ms 00:24:15.747 [2024-11-22 08:44:50.780934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.747 [2024-11-22 08:44:50.781024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.747 [2024-11-22 08:44:50.781035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:15.747 [2024-11-22 08:44:50.781045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.049 ms 00:24:15.747 [2024-11-22 08:44:50.781055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.858488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.858526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.007 [2024-11-22 08:44:50.858540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.505 ms 00:24:16.007 [2024-11-22 08:44:50.858550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.858603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.858623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.007 [2024-11-22 08:44:50.858635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:16.007 [2024-11-22 08:44:50.858648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.859156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.859179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.007 [2024-11-22 08:44:50.859192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:24:16.007 [2024-11-22 08:44:50.859202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.859318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.859332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.007 [2024-11-22 08:44:50.859342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:24:16.007 [2024-11-22 08:44:50.859358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.878020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.878051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.007 [2024-11-22 08:44:50.878066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.672 ms 00:24:16.007 [2024-11-22 08:44:50.878092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.897112] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:16.007 [2024-11-22 08:44:50.897152] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:16.007 [2024-11-22 08:44:50.897166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.897176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:16.007 [2024-11-22 08:44:50.897203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.006 ms 00:24:16.007 [2024-11-22 08:44:50.897212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.925323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.925373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:16.007 [2024-11-22 08:44:50.925398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.115 ms 00:24:16.007 [2024-11-22 08:44:50.925408] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.942585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.942641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:16.007 [2024-11-22 08:44:50.942669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.145 ms 00:24:16.007 [2024-11-22 08:44:50.942679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.959763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.959798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:16.007 [2024-11-22 08:44:50.959811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.074 ms 00:24:16.007 [2024-11-22 08:44:50.959821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:50.960609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:50.960640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:16.007 [2024-11-22 08:44:50.960653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:24:16.007 [2024-11-22 08:44:50.960662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.007 [2024-11-22 08:44:51.041322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.007 [2024-11-22 08:44:51.041381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:16.007 [2024-11-22 08:44:51.041398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.761 ms 00:24:16.008 [2024-11-22 08:44:51.041436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.008 [2024-11-22 08:44:51.051883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:16.008 [2024-11-22 08:44:51.054137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.008 [2024-11-22 08:44:51.054165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:16.008 [2024-11-22 08:44:51.054178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.675 ms 00:24:16.008 [2024-11-22 08:44:51.054188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.008 [2024-11-22 08:44:51.054283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.008 [2024-11-22 08:44:51.054297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:16.008 [2024-11-22 08:44:51.054307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:16.008 [2024-11-22 08:44:51.054317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.008 [2024-11-22 08:44:51.054390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.008 [2024-11-22 08:44:51.054402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:16.008 [2024-11-22 08:44:51.054413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:16.008 [2024-11-22 08:44:51.054422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.008 [2024-11-22 08:44:51.054441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.008 [2024-11-22 08:44:51.054451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller
00:24:16.008 [2024-11-22 08:44:51.054462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:24:16.008 [2024-11-22 08:44:51.054471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:16.008 [2024-11-22 08:44:51.054509] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:16.008 [2024-11-22 08:44:51.054521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.008 [2024-11-22 08:44:51.054553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:24:16.008 [2024-11-22 08:44:51.054564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:24:16.008 [2024-11-22 08:44:51.054573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:16.267 [2024-11-22 08:44:51.089438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.267 [2024-11-22 08:44:51.089480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:16.267 [2024-11-22 08:44:51.089510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.902 ms
00:24:16.267 [2024-11-22 08:44:51.089521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:16.267 [2024-11-22 08:44:51.089608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.268 [2024-11-22 08:44:51.089621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:16.268 [2024-11-22 08:44:51.089632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:24:16.268 [2024-11-22 08:44:51.089642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:16.268 [2024-11-22 08:44:51.090941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.929 ms, result 0
00:24:17.206  [2024-11-22T08:44:53.230Z] Copying: 24/1024 [MB] (24 MBps) … [2024-11-22T08:45:32.729Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-22 08:45:32.600828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.642 [2024-11-22 08:45:32.600874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:57.642 [2024-11-22 08:45:32.600890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:24:57.642 [2024-11-22 08:45:32.600901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.642 [2024-11-22 08:45:32.600922] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:57.642 [2024-11-22 08:45:32.605234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.642 [2024-11-22 08:45:32.605268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:57.642 [2024-11-22 08:45:32.605280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.302 ms
00:24:57.642 [2024-11-22 08:45:32.605291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.642 [2024-11-22 08:45:32.607207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.642 [2024-11-22 08:45:32.607247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:57.642 [2024-11-22 08:45:32.607260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.884 ms
00:24:57.642 [2024-11-22 08:45:32.607270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.642 [2024-11-22 08:45:32.623826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.642 [2024-11-22 08:45:32.623876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:57.642 [2024-11-22 08:45:32.623889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.566 ms
00:24:57.642 [2024-11-22 08:45:32.623898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.642 [2024-11-22 08:45:32.628647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.642 [2024-11-22 08:45:32.628690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:24:57.642 [2024-11-22 08:45:32.628701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.709 ms
00:24:57.642 [2024-11-22 08:45:32.628711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.642 [2024-11-22 08:45:32.663568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.642 [2024-11-22 08:45:32.663607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:57.642 [2024-11-22
08:45:32.663620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.842 ms 00:24:57.642 [2024-11-22 08:45:32.663629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.642 [2024-11-22 08:45:32.684024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.642 [2024-11-22 08:45:32.684064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:57.642 [2024-11-22 08:45:32.684093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.376 ms 00:24:57.642 [2024-11-22 08:45:32.684103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.642 [2024-11-22 08:45:32.684218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.642 [2024-11-22 08:45:32.684232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:57.642 [2024-11-22 08:45:32.684253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:24:57.642 [2024-11-22 08:45:32.684262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.642 [2024-11-22 08:45:32.719493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.642 [2024-11-22 08:45:32.719539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:57.642 [2024-11-22 08:45:32.719552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.272 ms 00:24:57.642 [2024-11-22 08:45:32.719561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.903 [2024-11-22 08:45:32.754024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.903 [2024-11-22 08:45:32.754060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:57.903 [2024-11-22 08:45:32.754089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.467 ms 00:24:57.903 [2024-11-22 08:45:32.754114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.903 [2024-11-22 08:45:32.787787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.903 [2024-11-22 08:45:32.787828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:57.903 [2024-11-22 08:45:32.787840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.691 ms 00:24:57.903 [2024-11-22 08:45:32.787849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.903 [2024-11-22 08:45:32.822986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.903 [2024-11-22 08:45:32.823037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:57.903 [2024-11-22 08:45:32.823050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.101 ms 00:24:57.903 [2024-11-22 08:45:32.823060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.903 [2024-11-22 08:45:32.823097] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:57.903 [2024-11-22 08:45:32.823112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823405] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:57.903 [2024-11-22 08:45:32.823520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 
08:45:32.823664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:24:57.904 [2024-11-22 08:45:32.823919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.823990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:57.904 [2024-11-22 08:45:32.824176] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:57.905 [2024-11-22 08:45:32.824197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf9305f-5939-4b7e-b9bf-6af33c4f18fe 00:24:57.905 [2024-11-22 08:45:32.824207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:57.905 [2024-11-22 
08:45:32.824223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:57.905 [2024-11-22 08:45:32.824232] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:57.905 [2024-11-22 08:45:32.824242] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:57.905 [2024-11-22 08:45:32.824251] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:57.905 [2024-11-22 08:45:32.824260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:57.905 [2024-11-22 08:45:32.824270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:57.905 [2024-11-22 08:45:32.824291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:57.905 [2024-11-22 08:45:32.824300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:57.905 [2024-11-22 08:45:32.824310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.905 [2024-11-22 08:45:32.824320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:57.905 [2024-11-22 08:45:32.824330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:24:57.905 [2024-11-22 08:45:32.824339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.905 [2024-11-22 08:45:32.843700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.905 [2024-11-22 08:45:32.843737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:57.905 [2024-11-22 08:45:32.843760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.340 ms 00:24:57.905 [2024-11-22 08:45:32.843769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.905 [2024-11-22 08:45:32.844369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.905 [2024-11-22 08:45:32.844386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:57.905 [2024-11-22 08:45:32.844398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:24:57.905 [2024-11-22 08:45:32.844408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.905 [2024-11-22 08:45:32.894391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.905 [2024-11-22 08:45:32.894428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:57.905 [2024-11-22 08:45:32.894440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.905 [2024-11-22 08:45:32.894450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.905 [2024-11-22 08:45:32.894516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.905 [2024-11-22 08:45:32.894527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:57.905 [2024-11-22 08:45:32.894537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.905 [2024-11-22 08:45:32.894547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.905 [2024-11-22 08:45:32.894610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.905 [2024-11-22 08:45:32.894631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:57.905 [2024-11-22 08:45:32.894642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.905 [2024-11-22 08:45:32.894651] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:57.905 [2024-11-22 08:45:32.894666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.905 [2024-11-22 08:45:32.894676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:57.905 [2024-11-22 08:45:32.894686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.905 [2024-11-22 08:45:32.894696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.165 [2024-11-22 08:45:33.010178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.165 [2024-11-22 08:45:33.010228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.165 [2024-11-22 08:45:33.010257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.165 [2024-11-22 08:45:33.010268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.165 [2024-11-22 08:45:33.102863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.165 [2024-11-22 08:45:33.102910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.165 [2024-11-22 08:45:33.102922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.165 [2024-11-22 08:45:33.102932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.165 [2024-11-22 08:45:33.103042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.165 [2024-11-22 08:45:33.103060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.165 [2024-11-22 08:45:33.103071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.165 [2024-11-22 08:45:33.103081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.165 [2024-11-22 08:45:33.103120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.165 [2024-11-22 08:45:33.103131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.165 [2024-11-22 08:45:33.103140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.165 [2024-11-22 08:45:33.103150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.165 [2024-11-22 08:45:33.103269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.165 [2024-11-22 08:45:33.103285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.165 [2024-11-22 08:45:33.103295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.165 [2024-11-22 08:45:33.103306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.165 [2024-11-22 08:45:33.103344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.165 [2024-11-22 08:45:33.103356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:58.165 [2024-11-22 08:45:33.103367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.165 [2024-11-22 08:45:33.103376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.165 [2024-11-22 08:45:33.103437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.165 [2024-11-22 08:45:33.103451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.165 [2024-11-22 08:45:33.103466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:58.165 [2024-11-22 08:45:33.103475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.165 [2024-11-22 08:45:33.103516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:58.165 [2024-11-22 08:45:33.103528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:58.165 [2024-11-22 08:45:33.103538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:58.165 [2024-11-22 08:45:33.103548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:58.165 [2024-11-22 08:45:33.103663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.615 ms, result 0
00:24:59.544
00:24:59.544
00:24:59.544 08:45:34 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:24:59.544 [2024-11-22 08:45:34.326893] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:24:59.544 [2024-11-22 08:45:34.327036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79559 ]
00:24:59.544 [2024-11-22 08:45:34.503051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:59.544 [2024-11-22 08:45:34.607875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:00.116 [2024-11-22 08:45:34.949894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:00.116 [2024-11-22 08:45:34.949979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:00.117 [2024-11-22 08:45:35.110633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.117 [2024-11-22 08:45:35.110701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:00.117 [2024-11-22 08:45:35.110722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:00.117 [2024-11-22 08:45:35.110733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.117 [2024-11-22 08:45:35.110779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.117 [2024-11-22 08:45:35.110791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:00.117 [2024-11-22 08:45:35.110804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:25:00.117 [2024-11-22 08:45:35.110814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.117 [2024-11-22 08:45:35.110834] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:00.117 [2024-11-22 08:45:35.111928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:00.117 [2024-11-22 08:45:35.111980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.117 [2024-11-22 08:45:35.111992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:00.117 [2024-11-22 08:45:35.112003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms
00:25:00.117 [2024-11-22 08:45:35.112013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.117
[2024-11-22 08:45:35.113441] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:00.117 [2024-11-22 08:45:35.131204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.131246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:00.117 [2024-11-22 08:45:35.131259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.792 ms 00:25:00.117 [2024-11-22 08:45:35.131269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.131347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.131359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:00.117 [2024-11-22 08:45:35.131371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:00.117 [2024-11-22 08:45:35.131380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.138114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.138145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.117 [2024-11-22 08:45:35.138156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.675 ms 00:25:00.117 [2024-11-22 08:45:35.138165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.138258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.138272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.117 [2024-11-22 08:45:35.138282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:00.117 [2024-11-22 08:45:35.138292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.138329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.138340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:00.117 [2024-11-22 08:45:35.138351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:00.117 [2024-11-22 08:45:35.138361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.138382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:00.117 [2024-11-22 08:45:35.143140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.143175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.117 [2024-11-22 08:45:35.143187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.770 ms 00:25:00.117 [2024-11-22 08:45:35.143200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.143245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.143255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:00.117 [2024-11-22 08:45:35.143266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:00.117 [2024-11-22 08:45:35.143275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.143327] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:00.117 
[2024-11-22 08:45:35.143351] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:00.117 [2024-11-22 08:45:35.143384] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:00.117 [2024-11-22 08:45:35.143405] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:00.117 [2024-11-22 08:45:35.143493] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:00.117 [2024-11-22 08:45:35.143507] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:00.117 [2024-11-22 08:45:35.143519] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:00.117 [2024-11-22 08:45:35.143533] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:00.117 [2024-11-22 08:45:35.143544] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:00.117 [2024-11-22 08:45:35.143555] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:00.117 [2024-11-22 08:45:35.143565] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:00.117 [2024-11-22 08:45:35.143575] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:00.117 [2024-11-22 08:45:35.143585] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:00.117 [2024-11-22 08:45:35.143599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.143609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:00.117 [2024-11-22 08:45:35.143620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:25:00.117 [2024-11-22 08:45:35.143629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.143699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.117 [2024-11-22 08:45:35.143710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:00.117 [2024-11-22 08:45:35.143720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:00.117 [2024-11-22 08:45:35.143729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.117 [2024-11-22 08:45:35.143821] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:00.117 [2024-11-22 08:45:35.143839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:00.117 [2024-11-22 08:45:35.143849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.117 [2024-11-22 08:45:35.143859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.117 [2024-11-22 08:45:35.143869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:00.117 [2024-11-22 08:45:35.143878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:00.117 [2024-11-22 08:45:35.143887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:00.117 [2024-11-22 08:45:35.143897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:00.117 [2024-11-22 08:45:35.143907] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.12 MiB 00:25:00.117 [2024-11-22 08:45:35.143916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.117 [2024-11-22 08:45:35.143926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:00.117 [2024-11-22 08:45:35.143936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:00.117 [2024-11-22 08:45:35.143945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.117 [2024-11-22 08:45:35.143954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:00.117 [2024-11-22 08:45:35.143964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:00.117 [2024-11-22 08:45:35.143998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.117 [2024-11-22 08:45:35.144008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:00.117 [2024-11-22 08:45:35.144017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:00.117 [2024-11-22 08:45:35.144026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.117 [2024-11-22 08:45:35.144035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:00.117 [2024-11-22 08:45:35.144044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:00.117 [2024-11-22 08:45:35.144053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.117 [2024-11-22 08:45:35.144062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:00.117 [2024-11-22 08:45:35.144071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:00.117 [2024-11-22 08:45:35.144079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.117 [2024-11-22 08:45:35.144088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:00.117 [2024-11-22 08:45:35.144097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:00.117 [2024-11-22 08:45:35.144106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.117 [2024-11-22 08:45:35.144115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:00.117 [2024-11-22 08:45:35.144124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:00.117 [2024-11-22 08:45:35.144133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.117 [2024-11-22 08:45:35.144142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:00.118 [2024-11-22 08:45:35.144150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:00.118 [2024-11-22 08:45:35.144159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.118 [2024-11-22 08:45:35.144168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:00.118 [2024-11-22 08:45:35.144177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:00.118 [2024-11-22 08:45:35.144186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.118 [2024-11-22 08:45:35.144194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:00.118 [2024-11-22 08:45:35.144203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:00.118 [2024-11-22 08:45:35.144211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.118 [2024-11-22 08:45:35.144220] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:00.118 [2024-11-22 08:45:35.144229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:00.118 [2024-11-22 08:45:35.144238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.118 [2024-11-22 08:45:35.144247] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:00.118 [2024-11-22 08:45:35.144257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:00.118 [2024-11-22 08:45:35.144266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.118 [2024-11-22 08:45:35.144276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.118 [2024-11-22 08:45:35.144286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:00.118 [2024-11-22 08:45:35.144295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:00.118 [2024-11-22 08:45:35.144304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:00.118 [2024-11-22 08:45:35.144313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:00.118 [2024-11-22 08:45:35.144321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:00.118 [2024-11-22 08:45:35.144331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:00.118 [2024-11-22 08:45:35.144340] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:00.118 [2024-11-22 08:45:35.144353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.118 [2024-11-22 08:45:35.144364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:00.118 [2024-11-22 08:45:35.144375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:00.118 [2024-11-22 08:45:35.144385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:00.118 [2024-11-22 08:45:35.144395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:00.118 [2024-11-22 08:45:35.144406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:00.118 [2024-11-22 08:45:35.144416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:00.118 [2024-11-22 08:45:35.144426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:00.118 [2024-11-22 08:45:35.144436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:00.118 [2024-11-22 08:45:35.144446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:00.118 [2024-11-22 08:45:35.144456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:00.118 [2024-11-22 08:45:35.144466] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:00.118 [2024-11-22 08:45:35.144476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:00.118 [2024-11-22 08:45:35.144486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:00.118 [2024-11-22 08:45:35.144496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:00.118 [2024-11-22 08:45:35.144506] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:00.118 [2024-11-22 08:45:35.144520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.118 [2024-11-22 08:45:35.144532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:00.118 [2024-11-22 08:45:35.144542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:00.118 [2024-11-22 08:45:35.144554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:00.118 [2024-11-22 08:45:35.144565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:00.118 [2024-11-22 08:45:35.144575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.118 [2024-11-22 08:45:35.144586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:00.118 [2024-11-22 08:45:35.144596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:25:00.118 [2024-11-22 08:45:35.144606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.118 [2024-11-22 08:45:35.183139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.118 [2024-11-22 08:45:35.183179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.118 [2024-11-22 08:45:35.183208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.553 ms 00:25:00.118 [2024-11-22 08:45:35.183219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.118 [2024-11-22 08:45:35.183298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.118 [2024-11-22 08:45:35.183309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:00.118 [2024-11-22 08:45:35.183319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:00.118 [2024-11-22 08:45:35.183329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.243330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.243369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.378 [2024-11-22 08:45:35.243398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.046 ms 00:25:00.378 [2024-11-22 08:45:35.243408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.243442] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.243453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.378 [2024-11-22 08:45:35.243464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:00.378 [2024-11-22 08:45:35.243477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.243994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.244027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.378 [2024-11-22 08:45:35.244039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:25:00.378 [2024-11-22 08:45:35.244050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.244164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.244178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.378 [2024-11-22 08:45:35.244188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:00.378 [2024-11-22 08:45:35.244204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.261949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.261994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.378 [2024-11-22 08:45:35.262010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.754 ms 00:25:00.378 [2024-11-22 08:45:35.262020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.279309] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:00.378 [2024-11-22 08:45:35.279351] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:00.378 [2024-11-22 08:45:35.279381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.279392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:00.378 [2024-11-22 08:45:35.279403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.276 ms 00:25:00.378 [2024-11-22 08:45:35.279412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.307704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.307750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:00.378 [2024-11-22 08:45:35.307762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.295 ms 00:25:00.378 [2024-11-22 08:45:35.307772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.324951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.325001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:00.378 [2024-11-22 08:45:35.325029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.143 ms 00:25:00.378 [2024-11-22 08:45:35.325039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.342329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 
08:45:35.342367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:00.378 [2024-11-22 08:45:35.342378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.278 ms 00:25:00.378 [2024-11-22 08:45:35.342387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.343159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.343186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:00.378 [2024-11-22 08:45:35.343197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:25:00.378 [2024-11-22 08:45:35.343211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.425048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.425134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:00.378 [2024-11-22 08:45:35.425157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.948 ms 00:25:00.378 [2024-11-22 08:45:35.425168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.435426] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:00.378 [2024-11-22 08:45:35.437827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.437856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:00.378 [2024-11-22 08:45:35.437868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.632 ms 00:25:00.378 [2024-11-22 08:45:35.437877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.437964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.437986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:00.378 [2024-11-22 08:45:35.437997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:00.378 [2024-11-22 08:45:35.438010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.438080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.438092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:00.378 [2024-11-22 08:45:35.438103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:00.378 [2024-11-22 08:45:35.438112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.378 [2024-11-22 08:45:35.438132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.378 [2024-11-22 08:45:35.438142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:00.378 [2024-11-22 08:45:35.438153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:00.378 [2024-11-22 08:45:35.438162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.379 [2024-11-22 08:45:35.438197] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:00.379 [2024-11-22 08:45:35.438212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.379 [2024-11-22 08:45:35.438222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:00.379 
[2024-11-22 08:45:35.438233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:00.379 [2024-11-22 08:45:35.438243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.637 [2024-11-22 08:45:35.473344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.637 [2024-11-22 08:45:35.473385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:00.637 [2024-11-22 08:45:35.473398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.123 ms 00:25:00.637 [2024-11-22 08:45:35.473413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.637 [2024-11-22 08:45:35.473502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.637 [2024-11-22 08:45:35.473515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:00.637 [2024-11-22 08:45:35.473526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:00.637 [2024-11-22 08:45:35.473535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.637 [2024-11-22 08:45:35.474606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 364.150 ms, result 0 00:25:02.017
[2024-11-22T08:46:17.205Z] Copying: 1024/1024 [MB] (average 24 MBps)
[2024-11-22 08:46:17.097430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.118 [2024-11-22 08:46:17.097510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:42.118 [2024-11-22 08:46:17.097533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:42.118 [2024-11-22 08:46:17.097549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.118 [2024-11-22 08:46:17.097581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.118 [2024-11-22 08:46:17.103722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.118 [2024-11-22 08:46:17.103772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:42.118 [2024-11-22 08:46:17.103799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:25:42.118 [2024-11-22 08:46:17.103814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.118 [2024-11-22 08:46:17.104474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.118 [2024-11-22 08:46:17.104501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:42.118 [2024-11-22 08:46:17.104513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:25:42.118 [2024-11-22 08:46:17.104523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.118 [2024-11-22 08:46:17.107400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.118 [2024-11-22 08:46:17.107427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:42.118 [2024-11-22 08:46:17.107440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.866 ms 00:25:42.118 [2024-11-22 08:46:17.107450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.118 [2024-11-22 08:46:17.112623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.118 [2024-11-22 08:46:17.112662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:42.118 [2024-11-22 08:46:17.112674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.155 ms 00:25:42.118 [2024-11-22 08:46:17.112684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.118 [2024-11-22 08:46:17.147990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.118 [2024-11-22 08:46:17.148031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:42.118 [2024-11-22 08:46:17.148044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.290 ms 00:25:42.119 [2024-11-22 08:46:17.148053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.119 [2024-11-22 08:46:17.168098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.119 [2024-11-22 08:46:17.168138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:42.119 [2024-11-22 08:46:17.168152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.022 ms 00:25:42.119 [2024-11-22 08:46:17.168161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:25:42.119 [2024-11-22 08:46:17.168295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.119 [2024-11-22 08:46:17.168315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:42.119 [2024-11-22 08:46:17.168326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:42.119 [2024-11-22 08:46:17.168336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.378 [2024-11-22 08:46:17.202863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.378 [2024-11-22 08:46:17.202903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:42.378 [2024-11-22 08:46:17.202915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.567 ms 00:25:42.378 [2024-11-22 08:46:17.202924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.378 [2024-11-22 08:46:17.237067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.378 [2024-11-22 08:46:17.237126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:42.378 [2024-11-22 08:46:17.237154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.139 ms 00:25:42.378 [2024-11-22 08:46:17.237163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.378 [2024-11-22 08:46:17.270261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.378 [2024-11-22 08:46:17.270295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:42.378 [2024-11-22 08:46:17.270307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.114 ms 00:25:42.378 [2024-11-22 08:46:17.270316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.378 [2024-11-22 08:46:17.303952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.378 [2024-11-22 08:46:17.303996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:42.378 [2024-11-22 08:46:17.304008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.603 ms 00:25:42.378 [2024-11-22 08:46:17.304017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.378 [2024-11-22 08:46:17.304067] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:42.378 [2024-11-22 08:46:17.304083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 
[2024-11-22 08:46:17.304174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:25:42.378 [2024-11-22 08:46:17.304440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:42.378 [2024-11-22 08:46:17.304633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.304993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:42.379 [2024-11-22 08:46:17.305144] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:42.379 [2024-11-22 08:46:17.305157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf9305f-5939-4b7e-b9bf-6af33c4f18fe 00:25:42.379 [2024-11-22 08:46:17.305168] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:42.379 [2024-11-22 08:46:17.305178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:42.379 [2024-11-22 08:46:17.305187] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:42.379 [2024-11-22 08:46:17.305197] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:42.379 [2024-11-22 08:46:17.305206] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:42.379 [2024-11-22 08:46:17.305215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:42.379 [2024-11-22 08:46:17.305235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:42.379 
[2024-11-22 08:46:17.305245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:42.379 [2024-11-22 08:46:17.305254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:42.379 [2024-11-22 08:46:17.305263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.379 [2024-11-22 08:46:17.305273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:42.379 [2024-11-22 08:46:17.305283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:25:42.379 [2024-11-22 08:46:17.305293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.379 [2024-11-22 08:46:17.324785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.379 [2024-11-22 08:46:17.324821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:42.379 [2024-11-22 08:46:17.324848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.473 ms 00:25:42.379 [2024-11-22 08:46:17.324859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.379 [2024-11-22 08:46:17.325440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.379 [2024-11-22 08:46:17.325463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:42.379 [2024-11-22 08:46:17.325474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:25:42.379 [2024-11-22 08:46:17.325489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.379 [2024-11-22 08:46:17.376279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.379 [2024-11-22 08:46:17.376316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.379 [2024-11-22 08:46:17.376344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.379 [2024-11-22 08:46:17.376355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.379 [2024-11-22 08:46:17.376406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.379 [2024-11-22 08:46:17.376417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.379 [2024-11-22 08:46:17.376427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.379 [2024-11-22 08:46:17.376442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.379 [2024-11-22 08:46:17.376504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.379 [2024-11-22 08:46:17.376516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.379 [2024-11-22 08:46:17.376526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.379 [2024-11-22 08:46:17.376536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.379 [2024-11-22 08:46:17.376551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.379 [2024-11-22 08:46:17.376561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.379 [2024-11-22 08:46:17.376571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.379 [2024-11-22 08:46:17.376580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.494931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.639 [2024-11-22 08:46:17.494989] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.639 [2024-11-22 08:46:17.495019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.639 [2024-11-22 08:46:17.495029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.589663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.639 [2024-11-22 08:46:17.589711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.639 [2024-11-22 08:46:17.589724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.639 [2024-11-22 08:46:17.589749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.589834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.639 [2024-11-22 08:46:17.589846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.639 [2024-11-22 08:46:17.589856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.639 [2024-11-22 08:46:17.589867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.589915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.639 [2024-11-22 08:46:17.589926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.639 [2024-11-22 08:46:17.589936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.639 [2024-11-22 08:46:17.589946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.590070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.639 [2024-11-22 08:46:17.590083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.639 [2024-11-22 08:46:17.590094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.639 [2024-11-22 08:46:17.590119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.590153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.639 [2024-11-22 08:46:17.590165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:42.639 [2024-11-22 08:46:17.590176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.639 [2024-11-22 08:46:17.590185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.590222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.639 [2024-11-22 08:46:17.590238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.639 [2024-11-22 08:46:17.590248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.639 [2024-11-22 08:46:17.590257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.590296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.639 [2024-11-22 08:46:17.590307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.639 [2024-11-22 08:46:17.590318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.639 [2024-11-22 08:46:17.590328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.639 [2024-11-22 08:46:17.590444] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 493.789 ms, result 0 00:25:43.577 00:25:43.577 00:25:43.577 08:46:18 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:45.483 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:45.483 08:46:20 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:45.483 [2024-11-22 08:46:20.471475] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:25:45.483 [2024-11-22 08:46:20.471603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80023 ] 00:25:45.742 [2024-11-22 08:46:20.647302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.742 [2024-11-22 08:46:20.750427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.312 [2024-11-22 08:46:21.101210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.312 [2024-11-22 08:46:21.101440] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.312 [2024-11-22 08:46:21.261253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.261458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:46.312 [2024-11-22 08:46:21.261507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:46.312 [2024-11-22 08:46:21.261519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.261577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.261590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.312 [2024-11-22 08:46:21.261605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:46.312 [2024-11-22 08:46:21.261615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.261638] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:46.312 [2024-11-22 08:46:21.262703] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:46.312 [2024-11-22 08:46:21.262737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.262749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.312 [2024-11-22 08:46:21.262761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:25:46.312 [2024-11-22 08:46:21.262771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.264262] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:46.312 [2024-11-22 08:46:21.283292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.283458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:46.312 [2024-11-22 08:46:21.283494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.061 ms 00:25:46.312 
[2024-11-22 08:46:21.283506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.283603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.283617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:46.312 [2024-11-22 08:46:21.283628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:46.312 [2024-11-22 08:46:21.283638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.290446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.290601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.312 [2024-11-22 08:46:21.290642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.745 ms 00:25:46.312 [2024-11-22 08:46:21.290653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.290741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.290754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.312 [2024-11-22 08:46:21.290765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:46.312 [2024-11-22 08:46:21.290775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.290815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.290827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:46.312 [2024-11-22 08:46:21.290837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:46.312 [2024-11-22 08:46:21.290847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.290871] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:46.312 [2024-11-22 08:46:21.295564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.295596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.312 [2024-11-22 08:46:21.295608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.706 ms 00:25:46.312 [2024-11-22 08:46:21.295637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.295668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.312 [2024-11-22 08:46:21.295679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:46.312 [2024-11-22 08:46:21.295690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:46.312 [2024-11-22 08:46:21.295699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.312 [2024-11-22 08:46:21.295752] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:46.312 [2024-11-22 08:46:21.295774] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:46.312 [2024-11-22 08:46:21.295807] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:46.312 [2024-11-22 08:46:21.295827] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:46.312 
[2024-11-22 08:46:21.295914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:46.312 [2024-11-22 08:46:21.295926] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:46.312 [2024-11-22 08:46:21.295939] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:46.312 [2024-11-22 08:46:21.295951] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:46.312 [2024-11-22 08:46:21.295963] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:46.312 [2024-11-22 08:46:21.295995] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:46.312 [2024-11-22 08:46:21.296005] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:46.313 [2024-11-22 08:46:21.296015] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:46.313 [2024-11-22 08:46:21.296025] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:46.313 [2024-11-22 08:46:21.296039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.313 [2024-11-22 08:46:21.296049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:46.313 [2024-11-22 08:46:21.296059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:46.313 [2024-11-22 08:46:21.296069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.313 [2024-11-22 08:46:21.296139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.313 [2024-11-22 08:46:21.296149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:46.313 [2024-11-22 08:46:21.296160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:46.313 [2024-11-22 08:46:21.296169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.313 [2024-11-22 08:46:21.296260] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:46.313 [2024-11-22 08:46:21.296277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:46.313 [2024-11-22 08:46:21.296288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:46.313 [2024-11-22 08:46:21.296318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:46.313 [2024-11-22 08:46:21.296346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.313 [2024-11-22 08:46:21.296364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:46.313 [2024-11-22 08:46:21.296374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:46.313 [2024-11-22 08:46:21.296382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.50 MiB 00:25:46.313 [2024-11-22 08:46:21.296392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:46.313 [2024-11-22 08:46:21.296401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:46.313 [2024-11-22 08:46:21.296418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:46.313 [2024-11-22 08:46:21.296437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:46.313 [2024-11-22 08:46:21.296463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:46.313 [2024-11-22 08:46:21.296506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:46.313 [2024-11-22 08:46:21.296533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:46.313 [2024-11-22 08:46:21.296560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:46.313 [2024-11-22 08:46:21.296587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.313 [2024-11-22 08:46:21.296605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:46.313 [2024-11-22 08:46:21.296614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:46.313 [2024-11-22 08:46:21.296622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.313 [2024-11-22 08:46:21.296631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:46.313 [2024-11-22 08:46:21.296641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:46.313 [2024-11-22 08:46:21.296650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:46.313 [2024-11-22 08:46:21.296668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:46.313 [2024-11-22 08:46:21.296678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296687] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:46.313 [2024-11-22 08:46:21.296696] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:46.313 [2024-11-22 08:46:21.296707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.313 [2024-11-22 08:46:21.296726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:46.313 [2024-11-22 08:46:21.296736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:46.313 [2024-11-22 08:46:21.296745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:46.313 [2024-11-22 08:46:21.296754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:46.313 [2024-11-22 08:46:21.296763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:46.313 [2024-11-22 08:46:21.296772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:46.313 [2024-11-22 08:46:21.296782] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:46.313 [2024-11-22 08:46:21.296794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.313 [2024-11-22 08:46:21.296806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:46.313 [2024-11-22 08:46:21.296816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:46.313 [2024-11-22 08:46:21.296826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:46.313 [2024-11-22 08:46:21.296835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:46.313 [2024-11-22 08:46:21.296845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:46.313 [2024-11-22 08:46:21.296855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:46.313 [2024-11-22 08:46:21.296865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:46.313 [2024-11-22 08:46:21.296875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:46.313 [2024-11-22 08:46:21.296886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:46.313 [2024-11-22 08:46:21.296897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:46.313 [2024-11-22 08:46:21.296907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:46.314 [2024-11-22 08:46:21.296918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:46.314 [2024-11-22 08:46:21.296928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 
00:25:46.314 [2024-11-22 08:46:21.296938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:46.314 [2024-11-22 08:46:21.296948] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:46.314 [2024-11-22 08:46:21.296962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.314 [2024-11-22 08:46:21.296986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:46.314 [2024-11-22 08:46:21.296997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:46.314 [2024-11-22 08:46:21.297008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:46.314 [2024-11-22 08:46:21.297019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:46.314 [2024-11-22 08:46:21.297031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.314 [2024-11-22 08:46:21.297041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:46.314 [2024-11-22 08:46:21.297052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:25:46.314 [2024-11-22 08:46:21.297062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.314 [2024-11-22 08:46:21.331357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.314 [2024-11-22 08:46:21.331393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.314 [2024-11-22 08:46:21.331406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.307 ms 00:25:46.314 [2024-11-22 08:46:21.331432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.314 [2024-11-22 08:46:21.331509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.314 [2024-11-22 08:46:21.331520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:46.314 [2024-11-22 08:46:21.331531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:46.314 [2024-11-22 08:46:21.331541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.403788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.403827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.574 [2024-11-22 08:46:21.403841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.312 ms 00:25:46.574 [2024-11-22 08:46:21.403851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.403889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.403900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.574 [2024-11-22 08:46:21.403910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:46.574 [2024-11-22 08:46:21.403924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.404462] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.404478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.574 [2024-11-22 08:46:21.404490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:25:46.574 [2024-11-22 08:46:21.404500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.404615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.404635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.574 [2024-11-22 08:46:21.404646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:46.574 [2024-11-22 08:46:21.404662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.422846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.422881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.574 [2024-11-22 08:46:21.422898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.192 ms 00:25:46.574 [2024-11-22 08:46:21.422909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.441293] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:46.574 [2024-11-22 08:46:21.441334] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:46.574 [2024-11-22 08:46:21.441349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.441359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:46.574 [2024-11-22 08:46:21.441370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.338 ms 00:25:46.574 [2024-11-22 08:46:21.441379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.469278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.469447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:46.574 [2024-11-22 08:46:21.469468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.902 ms 00:25:46.574 [2024-11-22 08:46:21.469479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.486942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.486985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:46.574 [2024-11-22 08:46:21.486998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.451 ms 00:25:46.574 [2024-11-22 08:46:21.487008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.504448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.504483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:46.574 [2024-11-22 08:46:21.504494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.430 ms 00:25:46.574 [2024-11-22 08:46:21.504503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.505197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 
08:46:21.505225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:46.574 [2024-11-22 08:46:21.505237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:25:46.574 [2024-11-22 08:46:21.505250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.585021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.585083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:46.574 [2024-11-22 08:46:21.585120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.880 ms 00:25:46.574 [2024-11-22 08:46:21.585130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.595375] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:46.574 [2024-11-22 08:46:21.597636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.597665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:46.574 [2024-11-22 08:46:21.597678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.479 ms 00:25:46.574 [2024-11-22 08:46:21.597688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.597761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.597774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:46.574 [2024-11-22 08:46:21.597784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:46.574 [2024-11-22 08:46:21.597798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.597866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.597878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:46.574 [2024-11-22 08:46:21.597888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:46.574 [2024-11-22 08:46:21.597897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.597917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.597927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:46.574 [2024-11-22 08:46:21.597937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:46.574 [2024-11-22 08:46:21.597946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.598016] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:46.574 [2024-11-22 08:46:21.598032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.598042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:46.574 [2024-11-22 08:46:21.598053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:46.574 [2024-11-22 08:46:21.598062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.632197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.632236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:46.574 [2024-11-22 
08:46:21.632250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.168 ms 00:25:46.574 [2024-11-22 08:46:21.632266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.632342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.574 [2024-11-22 08:46:21.632354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:46.574 [2024-11-22 08:46:21.632364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:46.574 [2024-11-22 08:46:21.632374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.574 [2024-11-22 08:46:21.633463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.381 ms, result 0 00:25:47.953  [2024-11-22T08:46:23.979Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-22T08:46:24.917Z] Copying: 48/1024 [MB] (24 MBps) [2024-11-22T08:46:25.853Z] Copying: 72/1024 [MB] (23 MBps) [2024-11-22T08:46:26.792Z] Copying: 93/1024 [MB] (21 MBps) [2024-11-22T08:46:27.729Z] Copying: 118/1024 [MB] (24 MBps) [2024-11-22T08:46:28.666Z] Copying: 141/1024 [MB] (23 MBps) [2024-11-22T08:46:30.042Z] Copying: 166/1024 [MB] (24 MBps) [2024-11-22T08:46:30.667Z] Copying: 191/1024 [MB] (24 MBps) [2024-11-22T08:46:32.045Z] Copying: 215/1024 [MB] (24 MBps) [2024-11-22T08:46:32.981Z] Copying: 239/1024 [MB] (24 MBps) [2024-11-22T08:46:33.918Z] Copying: 264/1024 [MB] (24 MBps) [2024-11-22T08:46:34.855Z] Copying: 288/1024 [MB] (24 MBps) [2024-11-22T08:46:35.791Z] Copying: 312/1024 [MB] (23 MBps) [2024-11-22T08:46:36.728Z] Copying: 336/1024 [MB] (23 MBps) [2024-11-22T08:46:37.663Z] Copying: 360/1024 [MB] (24 MBps) [2024-11-22T08:46:39.042Z] Copying: 384/1024 [MB] (24 MBps) [2024-11-22T08:46:39.979Z] Copying: 408/1024 [MB] (23 MBps) [2024-11-22T08:46:40.917Z] Copying: 432/1024 [MB] (23 MBps) [2024-11-22T08:46:41.854Z] Copying: 456/1024 [MB] (24 MBps) [2024-11-22T08:46:42.792Z] Copying: 480/1024 [MB] (24 MBps) [2024-11-22T08:46:43.731Z] Copying: 504/1024 [MB] (23 MBps) [2024-11-22T08:46:44.668Z] Copying: 529/1024 [MB] (24 MBps) [2024-11-22T08:46:46.045Z] Copying: 553/1024 [MB] (23 MBps) [2024-11-22T08:46:46.610Z] Copying: 577/1024 [MB] (24 MBps) [2024-11-22T08:46:47.987Z] Copying: 601/1024 [MB] (23 MBps) [2024-11-22T08:46:48.925Z] Copying: 625/1024 [MB] (24 MBps) [2024-11-22T08:46:49.861Z] Copying: 649/1024 [MB] (24 MBps) [2024-11-22T08:46:50.801Z] Copying: 674/1024 [MB] (24 MBps) [2024-11-22T08:46:51.738Z] Copying: 698/1024 [MB] (24 MBps) [2024-11-22T08:46:52.675Z] Copying: 722/1024 [MB] (24 MBps) [2024-11-22T08:46:53.612Z] Copying: 747/1024 [MB] (24 MBps) [2024-11-22T08:46:54.991Z] Copying: 771/1024 [MB] (24 MBps) [2024-11-22T08:46:55.927Z] Copying: 796/1024 [MB] (24 MBps) [2024-11-22T08:46:56.913Z] Copying: 821/1024 [MB] (25 MBps) [2024-11-22T08:46:57.849Z] Copying: 845/1024 [MB] (24 MBps) [2024-11-22T08:46:58.787Z] Copying: 871/1024 [MB] (25 MBps) [2024-11-22T08:46:59.744Z] Copying: 896/1024 [MB] (25 MBps) [2024-11-22T08:47:00.682Z] Copying: 922/1024 [MB] (25 MBps) [2024-11-22T08:47:01.619Z] Copying: 948/1024 [MB] (25 MBps) [2024-11-22T08:47:02.998Z] Copying: 973/1024 [MB] (25 MBps) [2024-11-22T08:47:03.934Z] Copying: 999/1024 [MB] (25 MBps) [2024-11-22T08:47:04.502Z] Copying: 1023/1024 [MB] (23 MBps) [2024-11-22T08:47:04.502Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-22 08:47:04.319080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.415 [2024-11-22 
08:47:04.319142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:29.415 [2024-11-22 08:47:04.319158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:29.415 [2024-11-22 08:47:04.319193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.415 [2024-11-22 08:47:04.320798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:29.415 [2024-11-22 08:47:04.325925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.415 [2024-11-22 08:47:04.325978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:29.415 [2024-11-22 08:47:04.325993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms 00:26:29.415 [2024-11-22 08:47:04.326003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.415 [2024-11-22 08:47:04.337024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.415 [2024-11-22 08:47:04.337208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:29.416 [2024-11-22 08:47:04.337231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.232 ms 00:26:29.416 [2024-11-22 08:47:04.337242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.416 [2024-11-22 08:47:04.360474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.416 [2024-11-22 08:47:04.360533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:29.416 [2024-11-22 08:47:04.360550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.237 ms 00:26:29.416 [2024-11-22 08:47:04.360562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.416 [2024-11-22 08:47:04.365492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.416 [2024-11-22 08:47:04.365526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:29.416 [2024-11-22 08:47:04.365537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.903 ms 00:26:29.416 [2024-11-22 08:47:04.365547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.416 [2024-11-22 08:47:04.400652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.416 [2024-11-22 08:47:04.400690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:29.416 [2024-11-22 08:47:04.400703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.101 ms 00:26:29.416 [2024-11-22 08:47:04.400713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.416 [2024-11-22 08:47:04.421472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.416 [2024-11-22 08:47:04.421628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:29.416 [2024-11-22 08:47:04.421648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.754 ms 00:26:29.416 [2024-11-22 08:47:04.421659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.676 [2024-11-22 08:47:04.547423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.676 [2024-11-22 08:47:04.547466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:29.676 [2024-11-22 08:47:04.547481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.912 ms 00:26:29.676 [2024-11-22 
08:47:04.547492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.676 [2024-11-22 08:47:04.582519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.676 [2024-11-22 08:47:04.582556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:29.676 [2024-11-22 08:47:04.582570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.067 ms 00:26:29.676 [2024-11-22 08:47:04.582579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.676 [2024-11-22 08:47:04.617851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.676 [2024-11-22 08:47:04.617898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:29.676 [2024-11-22 08:47:04.617911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.292 ms 00:26:29.676 [2024-11-22 08:47:04.617921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.676 [2024-11-22 08:47:04.652663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.676 [2024-11-22 08:47:04.652825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:29.676 [2024-11-22 08:47:04.652847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.752 ms 00:26:29.676 [2024-11-22 08:47:04.652857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.676 [2024-11-22 08:47:04.687703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.676 [2024-11-22 08:47:04.687841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:29.676 [2024-11-22 08:47:04.687862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.824 ms 00:26:29.676 [2024-11-22 08:47:04.687872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.676 [2024-11-22 08:47:04.687909] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:29.676 [2024-11-22 08:47:04.687925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108288 / 261120 wr_cnt: 1 state: open 00:26:29.676 [2024-11-22 08:47:04.687939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.687951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.687984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.687995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.688006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.688017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.688028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.688038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.688049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.688059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 
261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.688070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:29.676 [2024-11-22 08:47:04.688081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688591] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 
08:47:04.688856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.688991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.689001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.689012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:29.677 [2024-11-22 08:47:04.689031] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:29.677 [2024-11-22 08:47:04.689041] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf9305f-5939-4b7e-b9bf-6af33c4f18fe 00:26:29.678 [2024-11-22 08:47:04.689052] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108288 00:26:29.678 [2024-11-22 08:47:04.689062] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109248 00:26:29.678 [2024-11-22 08:47:04.689072] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108288 00:26:29.678 [2024-11-22 08:47:04.689082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:26:29.678 [2024-11-22 08:47:04.689092] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:29.678 [2024-11-22 08:47:04.689108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:29.678 [2024-11-22 08:47:04.689127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:29.678 [2024-11-22 08:47:04.689137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:29.678 [2024-11-22 08:47:04.689146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:29.678 [2024-11-22 08:47:04.689157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.678 [2024-11-22 08:47:04.689167] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:29.678 [2024-11-22 08:47:04.689177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:26:29.678 [2024-11-22 08:47:04.689187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.678 [2024-11-22 08:47:04.708806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.678 [2024-11-22 08:47:04.708840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:29.678 [2024-11-22 08:47:04.708852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.604 ms 00:26:29.678 [2024-11-22 08:47:04.708867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.678 [2024-11-22 08:47:04.709460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.678 [2024-11-22 08:47:04.709482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:29.678 [2024-11-22 08:47:04.709494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:26:29.678 [2024-11-22 08:47:04.709504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.937 [2024-11-22 08:47:04.759045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.937 [2024-11-22 08:47:04.759093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:29.937 [2024-11-22 08:47:04.759111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.937 [2024-11-22 08:47:04.759121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.937 [2024-11-22 08:47:04.759173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.937 [2024-11-22 08:47:04.759183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:29.937 [2024-11-22 08:47:04.759193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.937 [2024-11-22 08:47:04.759203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.937 [2024-11-22 08:47:04.759263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.937 [2024-11-22 08:47:04.759277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:29.937 [2024-11-22 08:47:04.759287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.937 [2024-11-22 08:47:04.759301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.937 [2024-11-22 08:47:04.759317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.937 [2024-11-22 08:47:04.759327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:29.937 [2024-11-22 08:47:04.759337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.937 [2024-11-22 08:47:04.759346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.937 [2024-11-22 08:47:04.874199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.937 [2024-11-22 08:47:04.874436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:29.937 [2024-11-22 08:47:04.874464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.937 [2024-11-22 08:47:04.874475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.937 [2024-11-22 08:47:04.967852] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:26:29.937 [2024-11-22 08:47:04.967896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:29.937 [2024-11-22 08:47:04.967909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.937 [2024-11-22 08:47:04.967919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.937 [2024-11-22 08:47:04.968042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.937 [2024-11-22 08:47:04.968055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:29.937 [2024-11-22 08:47:04.968067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.937 [2024-11-22 08:47:04.968076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.937 [2024-11-22 08:47:04.968117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.937 [2024-11-22 08:47:04.968144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:29.938 [2024-11-22 08:47:04.968155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.938 [2024-11-22 08:47:04.968165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.938 [2024-11-22 08:47:04.968278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.938 [2024-11-22 08:47:04.968291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:29.938 [2024-11-22 08:47:04.968302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.938 [2024-11-22 08:47:04.968312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.938 [2024-11-22 08:47:04.968350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.938 [2024-11-22 08:47:04.968362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:29.938 [2024-11-22 08:47:04.968373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.938 [2024-11-22 08:47:04.968382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.938 [2024-11-22 08:47:04.968418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.938 [2024-11-22 08:47:04.968429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:29.938 [2024-11-22 08:47:04.968440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.938 [2024-11-22 08:47:04.968449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.938 [2024-11-22 08:47:04.968493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.938 [2024-11-22 08:47:04.968504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:29.938 [2024-11-22 08:47:04.968515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.938 [2024-11-22 08:47:04.968525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.938 [2024-11-22 08:47:04.968640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 653.024 ms, result 0 00:26:31.843 00:26:31.843 00:26:31.843 08:47:06 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 
--count=262144 00:26:31.843 [2024-11-22 08:47:06.814227] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:26:31.843 [2024-11-22 08:47:06.814348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80480 ] 00:26:32.100 [2024-11-22 08:47:06.993043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.100 [2024-11-22 08:47:07.110411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.668 [2024-11-22 08:47:07.447289] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:32.668 [2024-11-22 08:47:07.447356] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:32.668 [2024-11-22 08:47:07.607468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.668 [2024-11-22 08:47:07.607709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:32.668 [2024-11-22 08:47:07.607741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:32.668 [2024-11-22 08:47:07.607751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.668 [2024-11-22 08:47:07.607809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.668 [2024-11-22 08:47:07.607823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:32.668 [2024-11-22 08:47:07.607837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:32.668 [2024-11-22 08:47:07.607846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.607869] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:32.669 [2024-11-22 08:47:07.608858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:32.669 [2024-11-22 08:47:07.608882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.608894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:32.669 [2024-11-22 08:47:07.608904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:26:32.669 [2024-11-22 08:47:07.608914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.610366] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:32.669 [2024-11-22 08:47:07.629078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.629117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:32.669 [2024-11-22 08:47:07.629131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.743 ms 00:26:32.669 [2024-11-22 08:47:07.629142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.629206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.629218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:32.669 [2024-11-22 08:47:07.629229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:32.669 [2024-11-22 08:47:07.629239] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.636197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.636356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:32.669 [2024-11-22 08:47:07.636477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.899 ms 00:26:32.669 [2024-11-22 08:47:07.636515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.636623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.636716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:32.669 [2024-11-22 08:47:07.636753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:32.669 [2024-11-22 08:47:07.636784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.636895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.636935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:32.669 [2024-11-22 08:47:07.637132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:32.669 [2024-11-22 08:47:07.637169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.637222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:32.669 [2024-11-22 08:47:07.642256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.642418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:32.669 [2024-11-22 08:47:07.642544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.049 ms 00:26:32.669 [2024-11-22 08:47:07.642567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.642608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.642619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:32.669 [2024-11-22 08:47:07.642630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:32.669 [2024-11-22 08:47:07.642649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.642703] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:32.669 [2024-11-22 08:47:07.642727] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:32.669 [2024-11-22 08:47:07.642761] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:32.669 [2024-11-22 08:47:07.642782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:32.669 [2024-11-22 08:47:07.642870] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:32.669 [2024-11-22 08:47:07.642883] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:32.669 [2024-11-22 08:47:07.642896] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:32.669 [2024-11-22 08:47:07.642909] ftl_layout.c: 
685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:32.669 [2024-11-22 08:47:07.642921] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:32.669 [2024-11-22 08:47:07.642933] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:32.669 [2024-11-22 08:47:07.642943] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:32.669 [2024-11-22 08:47:07.642953] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:32.669 [2024-11-22 08:47:07.642983] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:32.669 [2024-11-22 08:47:07.642997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.643007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:32.669 [2024-11-22 08:47:07.643018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:26:32.669 [2024-11-22 08:47:07.643028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.643103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.669 [2024-11-22 08:47:07.643114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:32.669 [2024-11-22 08:47:07.643124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:32.669 [2024-11-22 08:47:07.643134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.669 [2024-11-22 08:47:07.643225] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:32.669 [2024-11-22 08:47:07.643243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:32.669 [2024-11-22 08:47:07.643253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:32.669 [2024-11-22 08:47:07.643264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:32.669 [2024-11-22 08:47:07.643283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:32.669 [2024-11-22 08:47:07.643302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:32.669 [2024-11-22 08:47:07.643312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:32.669 [2024-11-22 08:47:07.643330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:32.669 [2024-11-22 08:47:07.643340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:32.669 [2024-11-22 08:47:07.643349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:32.669 [2024-11-22 08:47:07.643358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:32.669 [2024-11-22 08:47:07.643368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:32.669 [2024-11-22 08:47:07.643386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:32.669 [2024-11-22 08:47:07.643404] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:32.669 [2024-11-22 08:47:07.643413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:32.669 [2024-11-22 08:47:07.643432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.669 [2024-11-22 08:47:07.643451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:32.669 [2024-11-22 08:47:07.643460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.669 [2024-11-22 08:47:07.643478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:32.669 [2024-11-22 08:47:07.643487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.669 [2024-11-22 08:47:07.643505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:32.669 [2024-11-22 08:47:07.643514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.669 [2024-11-22 08:47:07.643532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:32.669 [2024-11-22 08:47:07.643541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:32.669 [2024-11-22 08:47:07.643559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:32.669 [2024-11-22 08:47:07.643568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:32.669 [2024-11-22 08:47:07.643577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:32.669 [2024-11-22 08:47:07.643586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:32.669 [2024-11-22 08:47:07.643595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:32.669 [2024-11-22 08:47:07.643604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:32.669 [2024-11-22 08:47:07.643621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:32.669 [2024-11-22 08:47:07.643630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.669 [2024-11-22 08:47:07.643640] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:32.670 [2024-11-22 08:47:07.643651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:32.670 [2024-11-22 08:47:07.643660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:32.670 [2024-11-22 08:47:07.643670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.670 [2024-11-22 08:47:07.643680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:32.670 [2024-11-22 08:47:07.643691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 
00:26:32.670 [2024-11-22 08:47:07.643700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:32.670 [2024-11-22 08:47:07.643709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:32.670 [2024-11-22 08:47:07.643718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:32.670 [2024-11-22 08:47:07.643727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:32.670 [2024-11-22 08:47:07.643737] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:32.670 [2024-11-22 08:47:07.643749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.670 [2024-11-22 08:47:07.643760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:32.670 [2024-11-22 08:47:07.643770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:32.670 [2024-11-22 08:47:07.643779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:32.670 [2024-11-22 08:47:07.643791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:32.670 [2024-11-22 08:47:07.643801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:32.670 [2024-11-22 08:47:07.643811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:32.670 [2024-11-22 08:47:07.643822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:32.670 [2024-11-22 08:47:07.643832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:32.670 [2024-11-22 08:47:07.643842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:32.670 [2024-11-22 08:47:07.643852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:32.670 [2024-11-22 08:47:07.643862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:32.670 [2024-11-22 08:47:07.643871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:32.670 [2024-11-22 08:47:07.643881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:32.670 [2024-11-22 08:47:07.643891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:32.670 [2024-11-22 08:47:07.643901] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:32.670 [2024-11-22 08:47:07.643915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.670 
[2024-11-22 08:47:07.643926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:32.670 [2024-11-22 08:47:07.643936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:32.670 [2024-11-22 08:47:07.643946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:32.670 [2024-11-22 08:47:07.643967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:32.670 [2024-11-22 08:47:07.643978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.670 [2024-11-22 08:47:07.643988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:32.670 [2024-11-22 08:47:07.643998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:26:32.670 [2024-11-22 08:47:07.644008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.670 [2024-11-22 08:47:07.683386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.670 [2024-11-22 08:47:07.683427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:32.670 [2024-11-22 08:47:07.683441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.397 ms 00:26:32.670 [2024-11-22 08:47:07.683452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.670 [2024-11-22 08:47:07.683533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.670 [2024-11-22 08:47:07.683544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:32.670 [2024-11-22 08:47:07.683554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:32.670 [2024-11-22 08:47:07.683565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.670 [2024-11-22 08:47:07.739085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.670 [2024-11-22 08:47:07.739123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:32.670 [2024-11-22 08:47:07.739137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.554 ms 00:26:32.670 [2024-11-22 08:47:07.739147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.670 [2024-11-22 08:47:07.739181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.670 [2024-11-22 08:47:07.739192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:32.670 [2024-11-22 08:47:07.739203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:32.670 [2024-11-22 08:47:07.739217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.670 [2024-11-22 08:47:07.739690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.670 [2024-11-22 08:47:07.739703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:32.670 [2024-11-22 08:47:07.739714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:26:32.670 [2024-11-22 08:47:07.739723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.670 [2024-11-22 08:47:07.739835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.670 [2024-11-22 08:47:07.739848] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:32.670 [2024-11-22 08:47:07.739859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:32.670 [2024-11-22 08:47:07.739875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.930 [2024-11-22 08:47:07.757749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.930 [2024-11-22 08:47:07.757785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:32.930 [2024-11-22 08:47:07.757801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.883 ms 00:26:32.930 [2024-11-22 08:47:07.757827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.930 [2024-11-22 08:47:07.776753] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:32.930 [2024-11-22 08:47:07.776794] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:32.930 [2024-11-22 08:47:07.776810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.930 [2024-11-22 08:47:07.776820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:32.930 [2024-11-22 08:47:07.776832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.914 ms 00:26:32.930 [2024-11-22 08:47:07.776841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.930 [2024-11-22 08:47:07.806513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.930 [2024-11-22 08:47:07.806558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:32.930 [2024-11-22 08:47:07.806572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.678 ms 00:26:32.930 [2024-11-22 08:47:07.806582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.930 [2024-11-22 08:47:07.824432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.930 [2024-11-22 08:47:07.824479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:32.930 [2024-11-22 08:47:07.824492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.825 ms 00:26:32.930 [2024-11-22 08:47:07.824516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.930 [2024-11-22 08:47:07.842884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.930 [2024-11-22 08:47:07.843043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:32.930 [2024-11-22 08:47:07.843064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.359 ms 00:26:32.931 [2024-11-22 08:47:07.843075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.843859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.843885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:32.931 [2024-11-22 08:47:07.843897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:26:32.931 [2024-11-22 08:47:07.843911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.927639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.927702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore P2L checkpoints 00:26:32.931 [2024-11-22 08:47:07.927739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.843 ms 00:26:32.931 [2024-11-22 08:47:07.927750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.938218] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:32.931 [2024-11-22 08:47:07.940691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.940720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:32.931 [2024-11-22 08:47:07.940733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.894 ms 00:26:32.931 [2024-11-22 08:47:07.940743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.940836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.940849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:32.931 [2024-11-22 08:47:07.940859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:32.931 [2024-11-22 08:47:07.940873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.942385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.942423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:32.931 [2024-11-22 08:47:07.942435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:26:32.931 [2024-11-22 08:47:07.942445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.942473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.942484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:32.931 [2024-11-22 08:47:07.942495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:32.931 [2024-11-22 08:47:07.942505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.942543] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:32.931 [2024-11-22 08:47:07.942560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.942570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:32.931 [2024-11-22 08:47:07.942580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:32.931 [2024-11-22 08:47:07.942590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.977398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.977436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:32.931 [2024-11-22 08:47:07.977449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.846 ms 00:26:32.931 [2024-11-22 08:47:07.977464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.931 [2024-11-22 08:47:07.977534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.931 [2024-11-22 08:47:07.977546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:32.931 [2024-11-22 08:47:07.977556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.034 ms
00:26:32.931 [2024-11-22 08:47:07.977565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:32.931 [2024-11-22 08:47:07.978617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 371.336 ms, result 0
00:26:34.310  [2024-11-22T08:47:10.334Z] Copying: 21/1024 [MB] (21 MBps)
[2024-11-22T08:47:11.271Z] Copying: 47/1024 [MB] (25 MBps)
[2024-11-22T08:47:12.209Z] Copying: 72/1024 [MB] (25 MBps)
[2024-11-22T08:47:13.588Z] Copying: 98/1024 [MB] (25 MBps)
[2024-11-22T08:47:14.525Z] Copying: 124/1024 [MB] (25 MBps)
[2024-11-22T08:47:15.462Z] Copying: 149/1024 [MB] (25 MBps)
[2024-11-22T08:47:16.399Z] Copying: 174/1024 [MB] (25 MBps)
[2024-11-22T08:47:17.336Z] Copying: 199/1024 [MB] (24 MBps)
[2024-11-22T08:47:18.275Z] Copying: 224/1024 [MB] (25 MBps)
[2024-11-22T08:47:19.211Z] Copying: 249/1024 [MB] (24 MBps)
[2024-11-22T08:47:20.590Z] Copying: 274/1024 [MB] (25 MBps)
[2024-11-22T08:47:21.528Z] Copying: 299/1024 [MB] (24 MBps)
[2024-11-22T08:47:22.466Z] Copying: 323/1024 [MB] (24 MBps)
[2024-11-22T08:47:23.404Z] Copying: 348/1024 [MB] (24 MBps)
[2024-11-22T08:47:24.341Z] Copying: 373/1024 [MB] (24 MBps)
[2024-11-22T08:47:25.280Z] Copying: 397/1024 [MB] (24 MBps)
[2024-11-22T08:47:26.271Z] Copying: 422/1024 [MB] (24 MBps)
[2024-11-22T08:47:27.208Z] Copying: 446/1024 [MB] (24 MBps)
[2024-11-22T08:47:28.587Z] Copying: 471/1024 [MB] (24 MBps)
[2024-11-22T08:47:29.524Z] Copying: 495/1024 [MB] (24 MBps)
[2024-11-22T08:47:30.462Z] Copying: 520/1024 [MB] (24 MBps)
[2024-11-22T08:47:31.400Z] Copying: 545/1024 [MB] (24 MBps)
[2024-11-22T08:47:32.338Z] Copying: 570/1024 [MB] (24 MBps)
[2024-11-22T08:47:33.276Z] Copying: 595/1024 [MB] (25 MBps)
[2024-11-22T08:47:34.216Z] Copying: 621/1024 [MB] (25 MBps)
[2024-11-22T08:47:35.596Z] Copying: 646/1024 [MB] (25 MBps)
[2024-11-22T08:47:36.164Z] Copying: 671/1024 [MB] (25 MBps)
[2024-11-22T08:47:37.541Z] Copying: 696/1024 [MB] (25 MBps)
[2024-11-22T08:47:38.493Z] Copying: 721/1024 [MB] (25 MBps)
[2024-11-22T08:47:39.439Z] Copying: 747/1024 [MB] (25 MBps)
[2024-11-22T08:47:40.376Z] Copying: 773/1024 [MB] (25 MBps)
[2024-11-22T08:47:41.313Z] Copying: 800/1024 [MB] (26 MBps)
[2024-11-22T08:47:42.252Z] Copying: 826/1024 [MB] (26 MBps)
[2024-11-22T08:47:43.189Z] Copying: 852/1024 [MB] (25 MBps)
[2024-11-22T08:47:44.567Z] Copying: 878/1024 [MB] (26 MBps)
[2024-11-22T08:47:45.503Z] Copying: 905/1024 [MB] (26 MBps)
[2024-11-22T08:47:46.440Z] Copying: 930/1024 [MB] (25 MBps)
[2024-11-22T08:47:47.377Z] Copying: 957/1024 [MB] (26 MBps)
[2024-11-22T08:47:48.315Z] Copying: 984/1024 [MB] (27 MBps)
[2024-11-22T08:47:48.884Z] Copying: 1010/1024 [MB] (26 MBps)
[2024-11-22T08:47:48.884Z] Copying: 1024/1024 [MB] (average 25 MBps)
[2024-11-22 08:47:48.663982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.797 [2024-11-22 08:47:48.664240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:13.797 [2024-11-22 08:47:48.664359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:27:13.797 [2024-11-22 08:47:48.664409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:13.797 [2024-11-22 08:47:48.664488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:13.797 [2024-11-22 08:47:48.669924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.797 [2024-11-22 08:47:48.670072] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:13.797 [2024-11-22 08:47:48.670383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.304 ms 00:27:13.797 [2024-11-22 08:47:48.670400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.797 [2024-11-22 08:47:48.670595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.797 [2024-11-22 08:47:48.670609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:13.797 [2024-11-22 08:47:48.670625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:27:13.797 [2024-11-22 08:47:48.670635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.797 [2024-11-22 08:47:48.675138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.797 [2024-11-22 08:47:48.675173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:13.797 [2024-11-22 08:47:48.675187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.481 ms 00:27:13.797 [2024-11-22 08:47:48.675198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.797 [2024-11-22 08:47:48.680147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.797 [2024-11-22 08:47:48.680181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:13.797 [2024-11-22 08:47:48.680192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:27:13.798 [2024-11-22 08:47:48.680202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.798 [2024-11-22 08:47:48.715209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.798 [2024-11-22 08:47:48.715246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:13.798 [2024-11-22 08:47:48.715259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.019 ms 00:27:13.798 [2024-11-22 08:47:48.715285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.798 [2024-11-22 08:47:48.735919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.798 [2024-11-22 08:47:48.735974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:13.798 [2024-11-22 08:47:48.735988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.631 ms 00:27:13.798 [2024-11-22 08:47:48.735998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-11-22 08:47:48.879566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-11-22 08:47:48.879611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:14.059 [2024-11-22 08:47:48.879626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 143.741 ms 00:27:14.059 [2024-11-22 08:47:48.879637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-11-22 08:47:48.916148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-11-22 08:47:48.916187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:14.059 [2024-11-22 08:47:48.916200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.553 ms 00:27:14.059 [2024-11-22 08:47:48.916210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-11-22 08:47:48.950756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 
[2024-11-22 08:47:48.950793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:14.059 [2024-11-22 08:47:48.950818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.563 ms 00:27:14.059 [2024-11-22 08:47:48.950829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.059 [2024-11-22 08:47:48.984462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.059 [2024-11-22 08:47:48.984498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:14.059 [2024-11-22 08:47:48.984510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.654 ms 00:27:14.060 [2024-11-22 08:47:48.984535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.060 [2024-11-22 08:47:49.017812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.060 [2024-11-22 08:47:49.017847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:14.060 [2024-11-22 08:47:49.017859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.258 ms 00:27:14.060 [2024-11-22 08:47:49.017868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.060 [2024-11-22 08:47:49.017902] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:14.060 [2024-11-22 08:47:49.017917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:14.060 [2024-11-22 08:47:49.017929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.017939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.017949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.017971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.017998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018103] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 
08:47:49.018387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:27:14.060 [2024-11-22 08:47:49.018643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:14.060 [2024-11-22 08:47:49.018787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.018990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.019000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.019011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:14.061 [2024-11-22 08:47:49.019028] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:14.061 [2024-11-22 08:47:49.019038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cf9305f-5939-4b7e-b9bf-6af33c4f18fe 00:27:14.061 [2024-11-22 08:47:49.019049] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:14.061 [2024-11-22 08:47:49.019059] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 23744 00:27:14.061 [2024-11-22 08:47:49.019069] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 22784 00:27:14.061 [2024-11-22 08:47:49.019079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0421 00:27:14.061 [2024-11-22 08:47:49.019089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:14.061 [2024-11-22 08:47:49.019104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:14.061 [2024-11-22 08:47:49.019114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:14.061 [2024-11-22 08:47:49.019133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:14.061 [2024-11-22 08:47:49.019143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:14.061 [2024-11-22 08:47:49.019152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.061 [2024-11-22 08:47:49.019162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:14.061 [2024-11-22 08:47:49.019173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:27:14.061 [2024-11-22 08:47:49.019183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.061 [2024-11-22 08:47:49.038260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.061 [2024-11-22 08:47:49.038291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:14.061 [2024-11-22 08:47:49.038303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.074 ms 00:27:14.061 [2024-11-22 
08:47:49.038318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.061 [2024-11-22 08:47:49.038842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.061 [2024-11-22 08:47:49.038853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:14.061 [2024-11-22 08:47:49.038863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:27:14.061 [2024-11-22 08:47:49.038873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.061 [2024-11-22 08:47:49.090217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.061 [2024-11-22 08:47:49.090254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:14.061 [2024-11-22 08:47:49.090272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.061 [2024-11-22 08:47:49.090299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.061 [2024-11-22 08:47:49.090351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.061 [2024-11-22 08:47:49.090362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:14.061 [2024-11-22 08:47:49.090373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.061 [2024-11-22 08:47:49.090383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.061 [2024-11-22 08:47:49.090447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.061 [2024-11-22 08:47:49.090461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:14.061 [2024-11-22 08:47:49.090471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.061 [2024-11-22 08:47:49.090485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.061 [2024-11-22 08:47:49.090502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.061 [2024-11-22 08:47:49.090512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:14.061 [2024-11-22 08:47:49.090522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.061 [2024-11-22 08:47:49.090532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.320 [2024-11-22 08:47:49.207105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.320 [2024-11-22 08:47:49.207314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:14.320 [2024-11-22 08:47:49.207359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.320 [2024-11-22 08:47:49.207370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.320 [2024-11-22 08:47:49.301356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.320 [2024-11-22 08:47:49.301401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:14.320 [2024-11-22 08:47:49.301414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.320 [2024-11-22 08:47:49.301425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.320 [2024-11-22 08:47:49.301511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.320 [2024-11-22 08:47:49.301524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:14.320 [2024-11-22 08:47:49.301534] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:14.320 [2024-11-22 08:47:49.301544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:14.320 [2024-11-22 08:47:49.301583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:14.320 [2024-11-22 08:47:49.301593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:14.320 [2024-11-22 08:47:49.301603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:14.320 [2024-11-22 08:47:49.301612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:14.320 [2024-11-22 08:47:49.301719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:14.320 [2024-11-22 08:47:49.301732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:14.320 [2024-11-22 08:47:49.301742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:14.320 [2024-11-22 08:47:49.301751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:14.320 [2024-11-22 08:47:49.301787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:14.320 [2024-11-22 08:47:49.301798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:14.320 [2024-11-22 08:47:49.301808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:14.320 [2024-11-22 08:47:49.301817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:14.320 [2024-11-22 08:47:49.301852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:14.320 [2024-11-22 08:47:49.301863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:14.320 [2024-11-22 08:47:49.301873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:14.320 [2024-11-22 08:47:49.301882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:14.320 [2024-11-22 08:47:49.301922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:14.320 [2024-11-22 08:47:49.301933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:14.320 [2024-11-22 08:47:49.301942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:14.320 [2024-11-22 08:47:49.301952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:14.320 [2024-11-22 08:47:49.302166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 639.150 ms, result 0
00:27:15.308 
00:27:15.308 
00:27:15.308 08:47:50 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:17.215 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:17.215 08:47:51 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:27:17.215 08:47:51 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:17.215 Process with pid 78872 is not found
00:27:17.215 Remove shared memory files
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 78872
00:27:17.215 08:47:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78872 ']'
00:27:17.215 08:47:52 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78872
00:27:17.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78872) - No such process
00:27:17.215 08:47:52 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78872 is not found'
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:17.215 08:47:52 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:27:17.215 ************************************
00:27:17.215 END TEST ftl_restore
00:27:17.215 ************************************
00:27:17.215 
00:27:17.215 real 3m21.304s
00:27:17.215 user 3m9.412s
00:27:17.215 sys 0m13.205s
00:27:17.215 08:47:52 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:17.215 08:47:52 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:27:17.215 08:47:52 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:27:17.215 08:47:52 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:27:17.215 08:47:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:17.215 08:47:52 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:17.215 ************************************
00:27:17.215 START TEST ftl_dirty_shutdown
00:27:17.215 ************************************
00:27:17.215 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:27:17.476 * Looking for test storage...
00:27:17.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:17.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.476 --rc genhtml_branch_coverage=1 00:27:17.476 --rc genhtml_function_coverage=1 00:27:17.476 --rc genhtml_legend=1 00:27:17.476 --rc geninfo_all_blocks=1 00:27:17.476 --rc geninfo_unexecuted_blocks=1 00:27:17.476 00:27:17.476 ' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:17.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.476 --rc genhtml_branch_coverage=1 00:27:17.476 --rc genhtml_function_coverage=1 00:27:17.476 --rc genhtml_legend=1 00:27:17.476 --rc geninfo_all_blocks=1 00:27:17.476 --rc geninfo_unexecuted_blocks=1 00:27:17.476 00:27:17.476 ' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:17.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.476 --rc genhtml_branch_coverage=1 00:27:17.476 --rc genhtml_function_coverage=1 00:27:17.476 --rc genhtml_legend=1 00:27:17.476 --rc geninfo_all_blocks=1 00:27:17.476 --rc geninfo_unexecuted_blocks=1 00:27:17.476 00:27:17.476 ' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:17.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.476 --rc genhtml_branch_coverage=1 00:27:17.476 --rc genhtml_function_coverage=1 00:27:17.476 --rc genhtml_legend=1 00:27:17.476 --rc geninfo_all_blocks=1 00:27:17.476 --rc geninfo_unexecuted_blocks=1 00:27:17.476 00:27:17.476 ' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:17.476 08:47:52 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81011 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81011 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81011 ']' 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:17.476 08:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:17.736 [2024-11-22 08:47:52.584058] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
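While the target above comes up (the EAL parameter line that follows shows its full command line), the setup the harness has scripted so far reduces to a few RPC calls against a freshly started spdk_tgt. A minimal sketch for replaying it by hand, in bash, run from the spdk repo root — the binaries, PCI address, and jq filters are exactly the ones visible in this trace, and waitforlisten is the helper from test/common/autotest_common.sh:

    # Start the SPDK target pinned to core 0 and wait for its RPC socket
    # (dirty_shutdown.sh@44-47 in the trace above).
    ./build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    waitforlisten "$svcpid"

    # Attach the QEMU NVMe device at 0000:00:11.0 as the future base bdev
    # (common.sh@60) and read back its geometry the way get_bdev_size does.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size, .[] .num_blocks'
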
00:27:17.736 [2024-11-22 08:47:52.584192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81011 ] 00:27:17.736 [2024-11-22 08:47:52.764286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.994 [2024-11-22 08:47:52.866661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:18.932 08:47:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:19.192 08:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:19.192 { 00:27:19.192 "name": "nvme0n1", 00:27:19.192 "aliases": [ 00:27:19.192 "86769569-8f36-43d6-ae2a-cb7f77da290d" 00:27:19.192 ], 00:27:19.192 "product_name": "NVMe disk", 00:27:19.192 "block_size": 4096, 00:27:19.192 "num_blocks": 1310720, 00:27:19.192 "uuid": "86769569-8f36-43d6-ae2a-cb7f77da290d", 00:27:19.192 "numa_id": -1, 00:27:19.192 "assigned_rate_limits": { 00:27:19.192 "rw_ios_per_sec": 0, 00:27:19.192 "rw_mbytes_per_sec": 0, 00:27:19.192 "r_mbytes_per_sec": 0, 00:27:19.192 "w_mbytes_per_sec": 0 00:27:19.192 }, 00:27:19.192 "claimed": true, 00:27:19.192 "claim_type": "read_many_write_one", 00:27:19.192 "zoned": false, 00:27:19.192 "supported_io_types": { 00:27:19.192 "read": true, 00:27:19.192 "write": true, 00:27:19.192 "unmap": true, 00:27:19.192 "flush": true, 00:27:19.192 "reset": true, 00:27:19.192 "nvme_admin": true, 00:27:19.192 "nvme_io": true, 00:27:19.192 "nvme_io_md": false, 00:27:19.192 "write_zeroes": true, 00:27:19.192 "zcopy": false, 00:27:19.192 "get_zone_info": false, 00:27:19.192 "zone_management": false, 00:27:19.192 "zone_append": false, 00:27:19.192 "compare": true, 00:27:19.192 "compare_and_write": false, 00:27:19.192 "abort": true, 00:27:19.192 "seek_hole": false, 00:27:19.192 "seek_data": false, 00:27:19.192 
"copy": true, 00:27:19.192 "nvme_iov_md": false 00:27:19.192 }, 00:27:19.192 "driver_specific": { 00:27:19.192 "nvme": [ 00:27:19.192 { 00:27:19.192 "pci_address": "0000:00:11.0", 00:27:19.192 "trid": { 00:27:19.192 "trtype": "PCIe", 00:27:19.192 "traddr": "0000:00:11.0" 00:27:19.192 }, 00:27:19.192 "ctrlr_data": { 00:27:19.192 "cntlid": 0, 00:27:19.192 "vendor_id": "0x1b36", 00:27:19.192 "model_number": "QEMU NVMe Ctrl", 00:27:19.192 "serial_number": "12341", 00:27:19.192 "firmware_revision": "8.0.0", 00:27:19.192 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:19.192 "oacs": { 00:27:19.192 "security": 0, 00:27:19.192 "format": 1, 00:27:19.192 "firmware": 0, 00:27:19.192 "ns_manage": 1 00:27:19.192 }, 00:27:19.192 "multi_ctrlr": false, 00:27:19.192 "ana_reporting": false 00:27:19.192 }, 00:27:19.192 "vs": { 00:27:19.192 "nvme_version": "1.4" 00:27:19.192 }, 00:27:19.192 "ns_data": { 00:27:19.192 "id": 1, 00:27:19.192 "can_share": false 00:27:19.192 } 00:27:19.192 } 00:27:19.192 ], 00:27:19.192 "mp_policy": "active_passive" 00:27:19.192 } 00:27:19.192 } 00:27:19.193 ]' 00:27:19.193 08:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:19.193 08:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:19.193 08:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=8dfe7d70-c2d4-434c-9c21-a1b75110141f 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:19.452 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8dfe7d70-c2d4-434c-9c21-a1b75110141f 00:27:19.711 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:19.970 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=71434ea2-1a09-4a3e-9cee-5be865f54b05 00:27:19.970 08:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 71434ea2-1a09-4a3e-9cee-5be865f54b05 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:20.228 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:20.228 { 00:27:20.228 "name": "5162e56e-d1b9-48fe-9dee-f22bc4fd41d1", 00:27:20.228 "aliases": [ 00:27:20.228 "lvs/nvme0n1p0" 00:27:20.228 ], 00:27:20.229 "product_name": "Logical Volume", 00:27:20.229 "block_size": 4096, 00:27:20.229 "num_blocks": 26476544, 00:27:20.229 "uuid": "5162e56e-d1b9-48fe-9dee-f22bc4fd41d1", 00:27:20.229 "assigned_rate_limits": { 00:27:20.229 "rw_ios_per_sec": 0, 00:27:20.229 "rw_mbytes_per_sec": 0, 00:27:20.229 "r_mbytes_per_sec": 0, 00:27:20.229 "w_mbytes_per_sec": 0 00:27:20.229 }, 00:27:20.229 "claimed": false, 00:27:20.229 "zoned": false, 00:27:20.229 "supported_io_types": { 00:27:20.229 "read": true, 00:27:20.229 "write": true, 00:27:20.229 "unmap": true, 00:27:20.229 "flush": false, 00:27:20.229 "reset": true, 00:27:20.229 "nvme_admin": false, 00:27:20.229 "nvme_io": false, 00:27:20.229 "nvme_io_md": false, 00:27:20.229 "write_zeroes": true, 00:27:20.229 "zcopy": false, 00:27:20.229 "get_zone_info": false, 00:27:20.229 "zone_management": false, 00:27:20.229 "zone_append": false, 00:27:20.229 "compare": false, 00:27:20.229 "compare_and_write": false, 00:27:20.229 "abort": false, 00:27:20.229 "seek_hole": true, 00:27:20.229 "seek_data": true, 00:27:20.229 "copy": false, 00:27:20.229 "nvme_iov_md": false 00:27:20.229 }, 00:27:20.229 "driver_specific": { 00:27:20.229 "lvol": { 00:27:20.229 "lvol_store_uuid": "71434ea2-1a09-4a3e-9cee-5be865f54b05", 00:27:20.229 "base_bdev": "nvme0n1", 00:27:20.229 "thin_provision": true, 00:27:20.229 "num_allocated_clusters": 0, 00:27:20.229 "snapshot": false, 00:27:20.229 "clone": false, 00:27:20.229 "esnap_clone": false 00:27:20.229 } 00:27:20.229 } 00:27:20.229 } 00:27:20.229 ]' 00:27:20.229 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:20.488 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:20.488 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:20.488 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:20.488 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:20.488 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:20.488 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:20.488 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:20.488 08:47:55 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:20.747 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:20.747 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:20.747 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:20.747 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:20.747 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:20.747 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:20.747 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:20.747 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:21.007 { 00:27:21.007 "name": "5162e56e-d1b9-48fe-9dee-f22bc4fd41d1", 00:27:21.007 "aliases": [ 00:27:21.007 "lvs/nvme0n1p0" 00:27:21.007 ], 00:27:21.007 "product_name": "Logical Volume", 00:27:21.007 "block_size": 4096, 00:27:21.007 "num_blocks": 26476544, 00:27:21.007 "uuid": "5162e56e-d1b9-48fe-9dee-f22bc4fd41d1", 00:27:21.007 "assigned_rate_limits": { 00:27:21.007 "rw_ios_per_sec": 0, 00:27:21.007 "rw_mbytes_per_sec": 0, 00:27:21.007 "r_mbytes_per_sec": 0, 00:27:21.007 "w_mbytes_per_sec": 0 00:27:21.007 }, 00:27:21.007 "claimed": false, 00:27:21.007 "zoned": false, 00:27:21.007 "supported_io_types": { 00:27:21.007 "read": true, 00:27:21.007 "write": true, 00:27:21.007 "unmap": true, 00:27:21.007 "flush": false, 00:27:21.007 "reset": true, 00:27:21.007 "nvme_admin": false, 00:27:21.007 "nvme_io": false, 00:27:21.007 "nvme_io_md": false, 00:27:21.007 "write_zeroes": true, 00:27:21.007 "zcopy": false, 00:27:21.007 "get_zone_info": false, 00:27:21.007 "zone_management": false, 00:27:21.007 "zone_append": false, 00:27:21.007 "compare": false, 00:27:21.007 "compare_and_write": false, 00:27:21.007 "abort": false, 00:27:21.007 "seek_hole": true, 00:27:21.007 "seek_data": true, 00:27:21.007 "copy": false, 00:27:21.007 "nvme_iov_md": false 00:27:21.007 }, 00:27:21.007 "driver_specific": { 00:27:21.007 "lvol": { 00:27:21.007 "lvol_store_uuid": "71434ea2-1a09-4a3e-9cee-5be865f54b05", 00:27:21.007 "base_bdev": "nvme0n1", 00:27:21.007 "thin_provision": true, 00:27:21.007 "num_allocated_clusters": 0, 00:27:21.007 "snapshot": false, 00:27:21.007 "clone": false, 00:27:21.007 "esnap_clone": false 00:27:21.007 } 00:27:21.007 } 00:27:21.007 } 00:27:21.007 ]' 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:21.007 08:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:21.266 08:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:21.266 08:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:21.266 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:21.266 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:21.266 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:21.267 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:21.267 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 00:27:21.267 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:21.267 { 00:27:21.267 "name": "5162e56e-d1b9-48fe-9dee-f22bc4fd41d1", 00:27:21.267 "aliases": [ 00:27:21.267 "lvs/nvme0n1p0" 00:27:21.267 ], 00:27:21.267 "product_name": "Logical Volume", 00:27:21.267 "block_size": 4096, 00:27:21.267 "num_blocks": 26476544, 00:27:21.267 "uuid": "5162e56e-d1b9-48fe-9dee-f22bc4fd41d1", 00:27:21.267 "assigned_rate_limits": { 00:27:21.267 "rw_ios_per_sec": 0, 00:27:21.267 "rw_mbytes_per_sec": 0, 00:27:21.267 "r_mbytes_per_sec": 0, 00:27:21.267 "w_mbytes_per_sec": 0 00:27:21.267 }, 00:27:21.267 "claimed": false, 00:27:21.267 "zoned": false, 00:27:21.267 "supported_io_types": { 00:27:21.267 "read": true, 00:27:21.267 "write": true, 00:27:21.267 "unmap": true, 00:27:21.267 "flush": false, 00:27:21.267 "reset": true, 00:27:21.267 "nvme_admin": false, 00:27:21.267 "nvme_io": false, 00:27:21.267 "nvme_io_md": false, 00:27:21.267 "write_zeroes": true, 00:27:21.267 "zcopy": false, 00:27:21.267 "get_zone_info": false, 00:27:21.267 "zone_management": false, 00:27:21.267 "zone_append": false, 00:27:21.267 "compare": false, 00:27:21.267 "compare_and_write": false, 00:27:21.267 "abort": false, 00:27:21.267 "seek_hole": true, 00:27:21.267 "seek_data": true, 00:27:21.267 "copy": false, 00:27:21.267 "nvme_iov_md": false 00:27:21.267 }, 00:27:21.267 "driver_specific": { 00:27:21.267 "lvol": { 00:27:21.267 "lvol_store_uuid": "71434ea2-1a09-4a3e-9cee-5be865f54b05", 00:27:21.267 "base_bdev": "nvme0n1", 00:27:21.267 "thin_provision": true, 00:27:21.267 "num_allocated_clusters": 0, 00:27:21.267 "snapshot": false, 00:27:21.267 "clone": false, 00:27:21.267 "esnap_clone": false 00:27:21.267 } 00:27:21.267 } 00:27:21.267 } 00:27:21.267 ]' 00:27:21.267 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 
--l2p_dram_limit 10' 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:21.527 08:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5162e56e-d1b9-48fe-9dee-f22bc4fd41d1 --l2p_dram_limit 10 -c nvc0n1p0 00:27:21.527 [2024-11-22 08:47:56.603595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.527 [2024-11-22 08:47:56.603646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:21.527 [2024-11-22 08:47:56.603666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.527 [2024-11-22 08:47:56.603677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.527 [2024-11-22 08:47:56.603738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.527 [2024-11-22 08:47:56.603750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:21.527 [2024-11-22 08:47:56.603764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:21.527 [2024-11-22 08:47:56.603774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.527 [2024-11-22 08:47:56.603803] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:21.527 [2024-11-22 08:47:56.604783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:21.527 [2024-11-22 08:47:56.604825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.527 [2024-11-22 08:47:56.604837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:21.527 [2024-11-22 08:47:56.604850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:27:21.527 [2024-11-22 08:47:56.604861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.527 [2024-11-22 08:47:56.604940] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 042fedb5-e077-4992-80f2-cab61d09911c 00:27:21.527 [2024-11-22 08:47:56.606376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.527 [2024-11-22 08:47:56.606544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:21.527 [2024-11-22 08:47:56.606565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:21.527 [2024-11-22 08:47:56.606579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.788 [2024-11-22 08:47:56.614189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.788 [2024-11-22 08:47:56.614219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:21.788 [2024-11-22 08:47:56.614234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.563 ms 00:27:21.788 [2024-11-22 08:47:56.614246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.788 [2024-11-22 08:47:56.614339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.788 [2024-11-22 08:47:56.614354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:21.788 [2024-11-22 08:47:56.614364] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:21.788 [2024-11-22 08:47:56.614380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.788 [2024-11-22 08:47:56.614430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.788 [2024-11-22 08:47:56.614445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:21.788 [2024-11-22 08:47:56.614455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:21.788 [2024-11-22 08:47:56.614470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.788 [2024-11-22 08:47:56.614493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:21.788 [2024-11-22 08:47:56.619756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.788 [2024-11-22 08:47:56.619788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:21.788 [2024-11-22 08:47:56.619804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.275 ms 00:27:21.788 [2024-11-22 08:47:56.619814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.788 [2024-11-22 08:47:56.619849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.788 [2024-11-22 08:47:56.619860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:21.788 [2024-11-22 08:47:56.619873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:21.788 [2024-11-22 08:47:56.619893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.788 [2024-11-22 08:47:56.619935] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:21.788 [2024-11-22 08:47:56.620084] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:21.788 [2024-11-22 08:47:56.620105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:21.788 [2024-11-22 08:47:56.620118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:21.788 [2024-11-22 08:47:56.620133] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:21.788 [2024-11-22 08:47:56.620145] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:21.788 [2024-11-22 08:47:56.620158] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:21.788 [2024-11-22 08:47:56.620168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:21.788 [2024-11-22 08:47:56.620183] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:21.788 [2024-11-22 08:47:56.620192] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:21.788 [2024-11-22 08:47:56.620205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.788 [2024-11-22 08:47:56.620215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:21.788 [2024-11-22 08:47:56.620228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:27:21.788 [2024-11-22 08:47:56.620247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.788 [2024-11-22 08:47:56.620323] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.788 [2024-11-22 08:47:56.620333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:21.788 [2024-11-22 08:47:56.620346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:21.788 [2024-11-22 08:47:56.620356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.788 [2024-11-22 08:47:56.620465] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:21.788 [2024-11-22 08:47:56.620478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:21.788 [2024-11-22 08:47:56.620491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.788 [2024-11-22 08:47:56.620501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.788 [2024-11-22 08:47:56.620514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:21.788 [2024-11-22 08:47:56.620524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:21.788 [2024-11-22 08:47:56.620535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:21.788 [2024-11-22 08:47:56.620545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:21.788 [2024-11-22 08:47:56.620557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:21.788 [2024-11-22 08:47:56.620566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.788 [2024-11-22 08:47:56.620577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:21.788 [2024-11-22 08:47:56.620587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:21.788 [2024-11-22 08:47:56.620598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.788 [2024-11-22 08:47:56.620608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:21.788 [2024-11-22 08:47:56.620619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:21.788 [2024-11-22 08:47:56.620628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.788 [2024-11-22 08:47:56.620642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:21.788 [2024-11-22 08:47:56.620651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:21.788 [2024-11-22 08:47:56.620664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.788 [2024-11-22 08:47:56.620673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:21.788 [2024-11-22 08:47:56.620685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:21.788 [2024-11-22 08:47:56.620695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.789 [2024-11-22 08:47:56.620706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:21.789 [2024-11-22 08:47:56.620715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:21.789 [2024-11-22 08:47:56.620727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.789 [2024-11-22 08:47:56.620736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:21.789 [2024-11-22 08:47:56.620748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:21.789 [2024-11-22 08:47:56.620756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.789 [2024-11-22 08:47:56.620768] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:21.789 [2024-11-22 08:47:56.620777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:21.789 [2024-11-22 08:47:56.620788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.789 [2024-11-22 08:47:56.620797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:21.789 [2024-11-22 08:47:56.620810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:21.789 [2024-11-22 08:47:56.620820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.789 [2024-11-22 08:47:56.620831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:21.789 [2024-11-22 08:47:56.620840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:21.789 [2024-11-22 08:47:56.620852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.789 [2024-11-22 08:47:56.620861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:21.789 [2024-11-22 08:47:56.620873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:21.789 [2024-11-22 08:47:56.620882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.789 [2024-11-22 08:47:56.620893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:21.789 [2024-11-22 08:47:56.620903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:21.789 [2024-11-22 08:47:56.620914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.789 [2024-11-22 08:47:56.620923] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:21.789 [2024-11-22 08:47:56.620935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:21.789 [2024-11-22 08:47:56.620945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.789 [2024-11-22 08:47:56.620959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.789 [2024-11-22 08:47:56.620969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:21.789 [2024-11-22 08:47:56.620993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:21.789 [2024-11-22 08:47:56.621003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:21.789 [2024-11-22 08:47:56.621015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:21.789 [2024-11-22 08:47:56.621024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:21.789 [2024-11-22 08:47:56.621036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:21.789 [2024-11-22 08:47:56.621051] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:21.789 [2024-11-22 08:47:56.621067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.789 [2024-11-22 08:47:56.621081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:21.789 [2024-11-22 08:47:56.621095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:21.789 [2024-11-22 08:47:56.621105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:21.789 [2024-11-22 08:47:56.621118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:21.789 [2024-11-22 08:47:56.621128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:21.789 [2024-11-22 08:47:56.621141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:21.789 [2024-11-22 08:47:56.621151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:21.789 [2024-11-22 08:47:56.621163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:21.789 [2024-11-22 08:47:56.621173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:21.789 [2024-11-22 08:47:56.621188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:21.789 [2024-11-22 08:47:56.621198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:21.789 [2024-11-22 08:47:56.621210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:21.789 [2024-11-22 08:47:56.621221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:21.789 [2024-11-22 08:47:56.621236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:21.789 [2024-11-22 08:47:56.621246] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:21.789 [2024-11-22 08:47:56.621259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.789 [2024-11-22 08:47:56.621271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:21.789 [2024-11-22 08:47:56.621284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:21.789 [2024-11-22 08:47:56.621295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:21.789 [2024-11-22 08:47:56.621308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:21.789 [2024-11-22 08:47:56.621318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.789 [2024-11-22 08:47:56.621331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:21.789 [2024-11-22 08:47:56.621341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:27:21.789 [2024-11-22 08:47:56.621353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.789 [2024-11-22 08:47:56.621392] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:21.789 [2024-11-22 08:47:56.621410] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:25.985 [2024-11-22 08:48:00.220796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.221100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:25.986 [2024-11-22 08:48:00.221128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3605.244 ms 00:27:25.986 [2024-11-22 08:48:00.221143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.257743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.257793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.986 [2024-11-22 08:48:00.257809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.314 ms 00:27:25.986 [2024-11-22 08:48:00.257822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.257945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.257983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:25.986 [2024-11-22 08:48:00.257995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:25.986 [2024-11-22 08:48:00.258011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.301662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.301879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.986 [2024-11-22 08:48:00.301903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.647 ms 00:27:25.986 [2024-11-22 08:48:00.301919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.301971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.301990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.986 [2024-11-22 08:48:00.302001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:25.986 [2024-11-22 08:48:00.302013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.302504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.302523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.986 [2024-11-22 08:48:00.302535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:27:25.986 [2024-11-22 08:48:00.302547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.302643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.302669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.986 [2024-11-22 08:48:00.302682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:27:25.986 [2024-11-22 08:48:00.302697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.322213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.322256] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.986 [2024-11-22 08:48:00.322270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.527 ms 00:27:25.986 [2024-11-22 08:48:00.322282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.334623] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:25.986 [2024-11-22 08:48:00.337891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.337919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:25.986 [2024-11-22 08:48:00.337933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.553 ms 00:27:25.986 [2024-11-22 08:48:00.337943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.449444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.449495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:25.986 [2024-11-22 08:48:00.449513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.637 ms 00:27:25.986 [2024-11-22 08:48:00.449540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.449719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.449735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:25.986 [2024-11-22 08:48:00.449752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:27:25.986 [2024-11-22 08:48:00.449762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.485145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.485187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:25.986 [2024-11-22 08:48:00.485205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.383 ms 00:27:25.986 [2024-11-22 08:48:00.485231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.520093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.520131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:25.986 [2024-11-22 08:48:00.520148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.868 ms 00:27:25.986 [2024-11-22 08:48:00.520173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.520919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.520939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:25.986 [2024-11-22 08:48:00.520964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:27:25.986 [2024-11-22 08:48:00.520974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.620969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.621023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:25.986 [2024-11-22 08:48:00.621044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.081 ms 00:27:25.986 [2024-11-22 08:48:00.621055] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.658278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.658319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:25.986 [2024-11-22 08:48:00.658334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.199 ms 00:27:25.986 [2024-11-22 08:48:00.658360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.693022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.693058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:25.986 [2024-11-22 08:48:00.693074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.670 ms 00:27:25.986 [2024-11-22 08:48:00.693099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.727016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.727198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:25.986 [2024-11-22 08:48:00.727224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.928 ms 00:27:25.986 [2024-11-22 08:48:00.727234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.727303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.727315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:25.986 [2024-11-22 08:48:00.727332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:25.986 [2024-11-22 08:48:00.727342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.727445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.986 [2024-11-22 08:48:00.727457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:25.986 [2024-11-22 08:48:00.727474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:25.986 [2024-11-22 08:48:00.727484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.986 [2024-11-22 08:48:00.728518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4131.192 ms, result 0 00:27:25.986 { 00:27:25.986 "name": "ftl0", 00:27:25.986 "uuid": "042fedb5-e077-4992-80f2-cab61d09911c" 00:27:25.986 } 00:27:25.986 08:48:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:25.986 08:48:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:25.986 08:48:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:25.986 08:48:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:25.986 08:48:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:26.247 /dev/nbd0 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:26.247 1+0 records in 00:27:26.247 1+0 records out 00:27:26.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046001 s, 8.9 MB/s 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:26.247 08:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:26.247 [2024-11-22 08:48:01.285951] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:27:26.247 [2024-11-22 08:48:01.286078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81154 ] 00:27:26.507 [2024-11-22 08:48:01.467733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.766 [2024-11-22 08:48:01.612690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.144  [2024-11-22T08:48:04.170Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-22T08:48:05.108Z] Copying: 390/1024 [MB] (196 MBps) [2024-11-22T08:48:06.044Z] Copying: 588/1024 [MB] (197 MBps) [2024-11-22T08:48:07.421Z] Copying: 782/1024 [MB] (194 MBps) [2024-11-22T08:48:07.421Z] Copying: 967/1024 [MB] (184 MBps) [2024-11-22T08:48:08.798Z] Copying: 1024/1024 [MB] (average 192 MBps) 00:27:33.711 00:27:33.711 08:48:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:35.696 08:48:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:35.696 [2024-11-22 08:48:10.360332] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
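The trace has reached the data phase: ftl0 is exposed as /dev/nbd0, a 1 GiB file of random data has been staged and checksummed, and the spdk_dd instance starting above writes that file through the NBD device with direct I/O. Condensed into the commands the script actually issued (a sketch with flags copied from the trace; paths shortened to the repo root):

    # Expose the FTL bdev as a kernel block device (dirty_shutdown.sh@70-71).
    modprobe nbd
    ./scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0

    # Stage 262144 x 4 KiB blocks (1 GiB) of random data and record its md5
    # (dirty_shutdown.sh@75-76), then push the same data through ftl0 with
    # O_DIRECT writes (dirty_shutdown.sh@77).
    ./build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile
    ./build/bin/spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
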
00:27:35.696 [2024-11-22 08:48:10.360461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81251 ] 00:27:35.696 [2024-11-22 08:48:10.540576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.696 [2024-11-22 08:48:10.682716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.069  [2024-11-22T08:48:13.091Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-22T08:48:14.469Z] Copying: 31/1024 [MB] (14 MBps) [2024-11-22T08:48:15.407Z] Copying: 48/1024 [MB] (16 MBps) [2024-11-22T08:48:16.345Z] Copying: 64/1024 [MB] (16 MBps) [2024-11-22T08:48:17.281Z] Copying: 82/1024 [MB] (17 MBps) [2024-11-22T08:48:18.218Z] Copying: 99/1024 [MB] (17 MBps) [2024-11-22T08:48:19.155Z] Copying: 116/1024 [MB] (17 MBps) [2024-11-22T08:48:20.091Z] Copying: 134/1024 [MB] (17 MBps) [2024-11-22T08:48:21.469Z] Copying: 151/1024 [MB] (17 MBps) [2024-11-22T08:48:22.407Z] Copying: 169/1024 [MB] (17 MBps) [2024-11-22T08:48:23.342Z] Copying: 186/1024 [MB] (17 MBps) [2024-11-22T08:48:24.278Z] Copying: 203/1024 [MB] (17 MBps) [2024-11-22T08:48:25.215Z] Copying: 221/1024 [MB] (17 MBps) [2024-11-22T08:48:26.152Z] Copying: 238/1024 [MB] (17 MBps) [2024-11-22T08:48:27.090Z] Copying: 255/1024 [MB] (17 MBps) [2024-11-22T08:48:28.471Z] Copying: 273/1024 [MB] (17 MBps) [2024-11-22T08:48:29.415Z] Copying: 290/1024 [MB] (17 MBps) [2024-11-22T08:48:30.354Z] Copying: 308/1024 [MB] (17 MBps) [2024-11-22T08:48:31.291Z] Copying: 325/1024 [MB] (17 MBps) [2024-11-22T08:48:32.227Z] Copying: 343/1024 [MB] (17 MBps) [2024-11-22T08:48:33.164Z] Copying: 360/1024 [MB] (17 MBps) [2024-11-22T08:48:34.101Z] Copying: 378/1024 [MB] (17 MBps) [2024-11-22T08:48:35.480Z] Copying: 395/1024 [MB] (17 MBps) [2024-11-22T08:48:36.048Z] Copying: 413/1024 [MB] (17 MBps) [2024-11-22T08:48:37.426Z] Copying: 431/1024 [MB] (17 MBps) [2024-11-22T08:48:38.364Z] Copying: 448/1024 [MB] (17 MBps) [2024-11-22T08:48:39.301Z] Copying: 465/1024 [MB] (17 MBps) [2024-11-22T08:48:40.239Z] Copying: 483/1024 [MB] (17 MBps) [2024-11-22T08:48:41.177Z] Copying: 500/1024 [MB] (17 MBps) [2024-11-22T08:48:42.116Z] Copying: 517/1024 [MB] (17 MBps) [2024-11-22T08:48:43.053Z] Copying: 534/1024 [MB] (16 MBps) [2024-11-22T08:48:44.430Z] Copying: 551/1024 [MB] (17 MBps) [2024-11-22T08:48:45.367Z] Copying: 568/1024 [MB] (16 MBps) [2024-11-22T08:48:46.306Z] Copying: 585/1024 [MB] (16 MBps) [2024-11-22T08:48:47.243Z] Copying: 602/1024 [MB] (17 MBps) [2024-11-22T08:48:48.180Z] Copying: 620/1024 [MB] (17 MBps) [2024-11-22T08:48:49.116Z] Copying: 637/1024 [MB] (17 MBps) [2024-11-22T08:48:50.053Z] Copying: 655/1024 [MB] (17 MBps) [2024-11-22T08:48:51.428Z] Copying: 671/1024 [MB] (16 MBps) [2024-11-22T08:48:52.045Z] Copying: 689/1024 [MB] (17 MBps) [2024-11-22T08:48:53.448Z] Copying: 706/1024 [MB] (17 MBps) [2024-11-22T08:48:54.016Z] Copying: 723/1024 [MB] (17 MBps) [2024-11-22T08:48:55.393Z] Copying: 741/1024 [MB] (17 MBps) [2024-11-22T08:48:56.332Z] Copying: 758/1024 [MB] (16 MBps) [2024-11-22T08:48:57.270Z] Copying: 775/1024 [MB] (17 MBps) [2024-11-22T08:48:58.208Z] Copying: 792/1024 [MB] (17 MBps) [2024-11-22T08:48:59.143Z] Copying: 810/1024 [MB] (17 MBps) [2024-11-22T08:49:00.075Z] Copying: 827/1024 [MB] (17 MBps) [2024-11-22T08:49:01.011Z] Copying: 845/1024 [MB] (17 MBps) [2024-11-22T08:49:02.388Z] Copying: 863/1024 [MB] (17 MBps) 
[2024-11-22T08:49:03.325Z] Copying: 881/1024 [MB] (17 MBps) [2024-11-22T08:49:04.262Z] Copying: 898/1024 [MB] (17 MBps) [2024-11-22T08:49:05.198Z] Copying: 915/1024 [MB] (17 MBps) [2024-11-22T08:49:06.135Z] Copying: 932/1024 [MB] (17 MBps) [2024-11-22T08:49:07.071Z] Copying: 950/1024 [MB] (17 MBps) [2024-11-22T08:49:08.009Z] Copying: 968/1024 [MB] (17 MBps) [2024-11-22T08:49:09.386Z] Copying: 985/1024 [MB] (17 MBps) [2024-11-22T08:49:10.322Z] Copying: 1002/1024 [MB] (17 MBps) [2024-11-22T08:49:10.322Z] Copying: 1019/1024 [MB] (17 MBps) [2024-11-22T08:49:11.302Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:28:36.215 00:28:36.474 08:49:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:36.474 08:49:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:36.474 08:49:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:36.733 [2024-11-22 08:49:11.719601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.733 [2024-11-22 08:49:11.719806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:36.733 [2024-11-22 08:49:11.719831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:36.733 [2024-11-22 08:49:11.719846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.733 [2024-11-22 08:49:11.719883] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:36.733 [2024-11-22 08:49:11.724224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.733 [2024-11-22 08:49:11.724259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:36.733 [2024-11-22 08:49:11.724276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.321 ms 00:28:36.733 [2024-11-22 08:49:11.724286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.733 [2024-11-22 08:49:11.726331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.733 [2024-11-22 08:49:11.726369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:36.733 [2024-11-22 08:49:11.726385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.012 ms 00:28:36.733 [2024-11-22 08:49:11.726395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.733 [2024-11-22 08:49:11.744432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.733 [2024-11-22 08:49:11.744474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:36.733 [2024-11-22 08:49:11.744489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.041 ms 00:28:36.733 [2024-11-22 08:49:11.744515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.733 [2024-11-22 08:49:11.749460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.733 [2024-11-22 08:49:11.749494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:36.733 [2024-11-22 08:49:11.749508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:28:36.733 [2024-11-22 08:49:11.749517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.733 [2024-11-22 08:49:11.784522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.733 [2024-11-22 08:49:11.784559] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:36.733 [2024-11-22 08:49:11.784574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.985 ms 00:28:36.733 [2024-11-22 08:49:11.784584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.733 [2024-11-22 08:49:11.806000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.733 [2024-11-22 08:49:11.806038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:36.733 [2024-11-22 08:49:11.806055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.404 ms 00:28:36.733 [2024-11-22 08:49:11.806067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.733 [2024-11-22 08:49:11.806209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.734 [2024-11-22 08:49:11.806222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:36.734 [2024-11-22 08:49:11.806235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:28:36.734 [2024-11-22 08:49:11.806244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.994 [2024-11-22 08:49:11.841089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.994 [2024-11-22 08:49:11.841126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:36.994 [2024-11-22 08:49:11.841153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.878 ms 00:28:36.994 [2024-11-22 08:49:11.841162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.994 [2024-11-22 08:49:11.874715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.994 [2024-11-22 08:49:11.874751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:36.994 [2024-11-22 08:49:11.874766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.564 ms 00:28:36.994 [2024-11-22 08:49:11.874775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.994 [2024-11-22 08:49:11.908366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.994 [2024-11-22 08:49:11.908402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:36.994 [2024-11-22 08:49:11.908417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.601 ms 00:28:36.994 [2024-11-22 08:49:11.908441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.994 [2024-11-22 08:49:11.941905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.994 [2024-11-22 08:49:11.941942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:36.994 [2024-11-22 08:49:11.941969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.420 ms 00:28:36.994 [2024-11-22 08:49:11.941979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.994 [2024-11-22 08:49:11.942037] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:36.994 [2024-11-22 08:49:11.942053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942728] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:36.994 [2024-11-22 08:49:11.942848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.942993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943040] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:36.995 [2024-11-22 08:49:11.943321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:36.995 [2024-11-22 08:49:11.943334] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 042fedb5-e077-4992-80f2-cab61d09911c 00:28:36.995 [2024-11-22 08:49:11.943344] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:36.995 [2024-11-22 08:49:11.943359] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:36.995 [2024-11-22 08:49:11.943369] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:36.995 [2024-11-22 08:49:11.943384] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:36.995 [2024-11-22 08:49:11.943394] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:36.995 [2024-11-22 08:49:11.943406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:36.995 [2024-11-22 08:49:11.943416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:36.995 [2024-11-22 08:49:11.943428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:36.995 [2024-11-22 08:49:11.943437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:36.995 [2024-11-22 08:49:11.943448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.995 [2024-11-22 08:49:11.943459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:36.995 [2024-11-22 08:49:11.943473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:28:36.995 [2024-11-22 08:49:11.943483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.995 [2024-11-22 08:49:11.963474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.995 [2024-11-22 08:49:11.963508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:36.995 [2024-11-22 08:49:11.963526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.966 ms 00:28:36.995 [2024-11-22 08:49:11.963556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.995 [2024-11-22 08:49:11.964095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.995 [2024-11-22 08:49:11.964111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:36.995 [2024-11-22 08:49:11.964132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:28:36.995 [2024-11-22 08:49:11.964142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.995 [2024-11-22 08:49:12.024599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.995 [2024-11-22 08:49:12.024765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.995 [2024-11-22 08:49:12.024806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.995 [2024-11-22 08:49:12.024817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.995 [2024-11-22 08:49:12.024875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.995 [2024-11-22 08:49:12.024886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.995 [2024-11-22 08:49:12.024899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.995 [2024-11-22 08:49:12.024909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.995 [2024-11-22 08:49:12.025024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.995 [2024-11-22 08:49:12.025038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.995 [2024-11-22 08:49:12.025055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:28:36.995 [2024-11-22 08:49:12.025066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.995 [2024-11-22 08:49:12.025091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.995 [2024-11-22 08:49:12.025101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.995 [2024-11-22 08:49:12.025114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.995 [2024-11-22 08:49:12.025124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.139286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.255 [2024-11-22 08:49:12.139338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:37.255 [2024-11-22 08:49:12.139355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.255 [2024-11-22 08:49:12.139364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.237735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.255 [2024-11-22 08:49:12.237781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:37.255 [2024-11-22 08:49:12.237797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.255 [2024-11-22 08:49:12.237824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.237934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.255 [2024-11-22 08:49:12.237946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:37.255 [2024-11-22 08:49:12.237959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.255 [2024-11-22 08:49:12.237990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.238048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.255 [2024-11-22 08:49:12.238076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:37.255 [2024-11-22 08:49:12.238088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.255 [2024-11-22 08:49:12.238117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.238236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.255 [2024-11-22 08:49:12.238249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:37.255 [2024-11-22 08:49:12.238262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.255 [2024-11-22 08:49:12.238272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.238320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.255 [2024-11-22 08:49:12.238332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:37.255 [2024-11-22 08:49:12.238346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.255 [2024-11-22 08:49:12.238355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.238397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.255 [2024-11-22 08:49:12.238408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:37.255 [2024-11-22 
08:49:12.238420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.255 [2024-11-22 08:49:12.238430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.238481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.255 [2024-11-22 08:49:12.238493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:37.255 [2024-11-22 08:49:12.238506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.255 [2024-11-22 08:49:12.238515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.255 [2024-11-22 08:49:12.238648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.859 ms, result 0 00:28:37.255 true 00:28:37.255 08:49:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81011 00:28:37.255 08:49:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81011 00:28:37.255 08:49:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:37.514 [2024-11-22 08:49:12.363206] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:28:37.514 [2024-11-22 08:49:12.363323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81879 ] 00:28:37.514 [2024-11-22 08:49:12.545021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.773 [2024-11-22 08:49:12.651483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.153  [2024-11-22T08:49:15.177Z] Copying: 213/1024 [MB] (213 MBps) [2024-11-22T08:49:16.112Z] Copying: 431/1024 [MB] (217 MBps) [2024-11-22T08:49:17.048Z] Copying: 648/1024 [MB] (217 MBps) [2024-11-22T08:49:17.985Z] Copying: 860/1024 [MB] (211 MBps) [2024-11-22T08:49:18.922Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:28:43.835 00:28:43.835 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81011 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:43.835 08:49:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:43.835 [2024-11-22 08:49:18.906393] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:28:43.835 [2024-11-22 08:49:18.906512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81952 ] 00:28:44.094 [2024-11-22 08:49:19.083918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.352 [2024-11-22 08:49:19.195180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.611 [2024-11-22 08:49:19.546943] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:44.611 [2024-11-22 08:49:19.547019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:44.611 [2024-11-22 08:49:19.612878] blobstore.c:4890:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:44.611 [2024-11-22 08:49:19.613387] blobstore.c:4837:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:44.611 [2024-11-22 08:49:19.613615] blobstore.c:4837:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:44.870 [2024-11-22 08:49:19.930570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.870 [2024-11-22 08:49:19.930775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:44.870 [2024-11-22 08:49:19.930818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:44.870 [2024-11-22 08:49:19.930829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.870 [2024-11-22 08:49:19.930896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.870 [2024-11-22 08:49:19.930910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:44.870 [2024-11-22 08:49:19.930922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:44.870 [2024-11-22 08:49:19.930932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.870 [2024-11-22 08:49:19.930955] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:44.870 [2024-11-22 08:49:19.931965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:44.870 [2024-11-22 08:49:19.931987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.870 [2024-11-22 08:49:19.931997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:44.870 [2024-11-22 08:49:19.932009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:28:44.870 [2024-11-22 08:49:19.932018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.870 [2024-11-22 08:49:19.933478] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:45.130 [2024-11-22 08:49:19.951349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.130 [2024-11-22 08:49:19.951393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:45.130 [2024-11-22 08:49:19.951407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.901 ms 00:28:45.130 [2024-11-22 08:49:19.951432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.130 [2024-11-22 08:49:19.951504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.130 [2024-11-22 08:49:19.951520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:28:45.130 [2024-11-22 08:49:19.951532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:45.130 [2024-11-22 08:49:19.951542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.130 [2024-11-22 08:49:19.958334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.130 [2024-11-22 08:49:19.958362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:45.130 [2024-11-22 08:49:19.958373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.729 ms 00:28:45.130 [2024-11-22 08:49:19.958382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.130 [2024-11-22 08:49:19.958454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.130 [2024-11-22 08:49:19.958466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:45.130 [2024-11-22 08:49:19.958476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:45.130 [2024-11-22 08:49:19.958485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.130 [2024-11-22 08:49:19.958521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.130 [2024-11-22 08:49:19.958535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:45.130 [2024-11-22 08:49:19.958545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:45.130 [2024-11-22 08:49:19.958554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.130 [2024-11-22 08:49:19.958576] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:45.130 [2024-11-22 08:49:19.963281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.130 [2024-11-22 08:49:19.963314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:45.130 [2024-11-22 08:49:19.963325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:28:45.130 [2024-11-22 08:49:19.963334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.130 [2024-11-22 08:49:19.963379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.130 [2024-11-22 08:49:19.963389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:45.130 [2024-11-22 08:49:19.963400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:45.130 [2024-11-22 08:49:19.963409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.130 [2024-11-22 08:49:19.963460] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:45.130 [2024-11-22 08:49:19.963487] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:45.130 [2024-11-22 08:49:19.963521] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:45.130 [2024-11-22 08:49:19.963538] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:45.130 [2024-11-22 08:49:19.963624] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:45.130 [2024-11-22 08:49:19.963637] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:45.130 
[2024-11-22 08:49:19.963649] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:45.130 [2024-11-22 08:49:19.963662] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:45.131 [2024-11-22 08:49:19.963677] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:45.131 [2024-11-22 08:49:19.963688] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:45.131 [2024-11-22 08:49:19.963697] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:45.131 [2024-11-22 08:49:19.963707] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:45.131 [2024-11-22 08:49:19.963717] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:45.131 [2024-11-22 08:49:19.963728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.131 [2024-11-22 08:49:19.963737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:45.131 [2024-11-22 08:49:19.963748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:28:45.131 [2024-11-22 08:49:19.963757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.131 [2024-11-22 08:49:19.963825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.131 [2024-11-22 08:49:19.963838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:45.131 [2024-11-22 08:49:19.963848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:45.131 [2024-11-22 08:49:19.963858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.131 [2024-11-22 08:49:19.963948] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:45.131 [2024-11-22 08:49:19.963962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:45.131 [2024-11-22 08:49:19.963993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:45.131 [2024-11-22 08:49:19.964023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:45.131 [2024-11-22 08:49:19.964052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:45.131 [2024-11-22 08:49:19.964073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:45.131 [2024-11-22 08:49:19.964091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:45.131 [2024-11-22 08:49:19.964100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:45.131 [2024-11-22 08:49:19.964109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:45.131 [2024-11-22 08:49:19.964119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:45.131 [2024-11-22 08:49:19.964127] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:45.131 [2024-11-22 08:49:19.964145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:45.131 [2024-11-22 08:49:19.964173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:45.131 [2024-11-22 08:49:19.964200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:45.131 [2024-11-22 08:49:19.964226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:45.131 [2024-11-22 08:49:19.964252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:45.131 [2024-11-22 08:49:19.964286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:45.131 [2024-11-22 08:49:19.964304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:45.131 [2024-11-22 08:49:19.964313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:45.131 [2024-11-22 08:49:19.964322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:45.131 [2024-11-22 08:49:19.964330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:45.131 [2024-11-22 08:49:19.964339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:45.131 [2024-11-22 08:49:19.964348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:45.131 [2024-11-22 08:49:19.964367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:45.131 [2024-11-22 08:49:19.964376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.131 [2024-11-22 08:49:19.964385] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:45.131 [2024-11-22 08:49:19.964394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:45.131 [2024-11-22 08:49:19.964404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.131 [2024-11-22 
08:49:19.964426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:45.131 [2024-11-22 08:49:19.964435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:45.131 [2024-11-22 08:49:19.964444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:45.131 [2024-11-22 08:49:19.964453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:45.131 [2024-11-22 08:49:19.964461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:45.131 [2024-11-22 08:49:19.964471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:45.131 [2024-11-22 08:49:19.964481] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:45.131 [2024-11-22 08:49:19.964493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.131 [2024-11-22 08:49:19.964505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:45.131 [2024-11-22 08:49:19.964515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:45.131 [2024-11-22 08:49:19.964540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:45.131 [2024-11-22 08:49:19.964550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:45.131 [2024-11-22 08:49:19.964560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:45.131 [2024-11-22 08:49:19.964571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:45.131 [2024-11-22 08:49:19.964581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:45.131 [2024-11-22 08:49:19.964591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:45.131 [2024-11-22 08:49:19.964601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:45.131 [2024-11-22 08:49:19.964611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:45.131 [2024-11-22 08:49:19.964621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:45.131 [2024-11-22 08:49:19.964631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:45.131 [2024-11-22 08:49:19.964640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:45.131 [2024-11-22 08:49:19.964650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:45.131 [2024-11-22 08:49:19.964660] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:28:45.131 [2024-11-22 08:49:19.964672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.131 [2024-11-22 08:49:19.964683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:45.131 [2024-11-22 08:49:19.964694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:45.131 [2024-11-22 08:49:19.964708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:45.131 [2024-11-22 08:49:19.964719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:45.131 [2024-11-22 08:49:19.964729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.131 [2024-11-22 08:49:19.964740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:45.131 [2024-11-22 08:49:19.964750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:28:45.131 [2024-11-22 08:49:19.964760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.131 [2024-11-22 08:49:20.004206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.131 [2024-11-22 08:49:20.004391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:45.131 [2024-11-22 08:49:20.004416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.463 ms 00:28:45.131 [2024-11-22 08:49:20.004428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.131 [2024-11-22 08:49:20.004520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.131 [2024-11-22 08:49:20.004537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:45.131 [2024-11-22 08:49:20.004548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:45.131 [2024-11-22 08:49:20.004558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.131 [2024-11-22 08:49:20.061008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.061053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:45.132 [2024-11-22 08:49:20.061068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.480 ms 00:28:45.132 [2024-11-22 08:49:20.061082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.061145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.061156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:45.132 [2024-11-22 08:49:20.061166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:45.132 [2024-11-22 08:49:20.061175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.061672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.061686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:45.132 [2024-11-22 08:49:20.061696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:28:45.132 [2024-11-22 08:49:20.061705] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.061822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.061834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:45.132 [2024-11-22 08:49:20.061844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:28:45.132 [2024-11-22 08:49:20.061854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.081329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.081530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:45.132 [2024-11-22 08:49:20.081617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.487 ms 00:28:45.132 [2024-11-22 08:49:20.081654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.100281] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:45.132 [2024-11-22 08:49:20.100469] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:45.132 [2024-11-22 08:49:20.100585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.100624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:45.132 [2024-11-22 08:49:20.100658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.806 ms 00:28:45.132 [2024-11-22 08:49:20.100688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.130562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.130773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:45.132 [2024-11-22 08:49:20.130893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.859 ms 00:28:45.132 [2024-11-22 08:49:20.130918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.151273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.151430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:45.132 [2024-11-22 08:49:20.151567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.257 ms 00:28:45.132 [2024-11-22 08:49:20.151608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.170147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.170300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:45.132 [2024-11-22 08:49:20.170423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.453 ms 00:28:45.132 [2024-11-22 08:49:20.170460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.132 [2024-11-22 08:49:20.171394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.132 [2024-11-22 08:49:20.171527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:45.132 [2024-11-22 08:49:20.171609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:28:45.132 [2024-11-22 08:49:20.171646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:45.391 [2024-11-22 08:49:20.257637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.391 [2024-11-22 08:49:20.257891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:45.391 [2024-11-22 08:49:20.257919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.045 ms 00:28:45.391 [2024-11-22 08:49:20.257930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.391 [2024-11-22 08:49:20.269767] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:45.391 [2024-11-22 08:49:20.272987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.391 [2024-11-22 08:49:20.273019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:45.391 [2024-11-22 08:49:20.273034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.011 ms 00:28:45.391 [2024-11-22 08:49:20.273044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.391 [2024-11-22 08:49:20.273151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.391 [2024-11-22 08:49:20.273166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:45.391 [2024-11-22 08:49:20.273178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:45.391 [2024-11-22 08:49:20.273188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.391 [2024-11-22 08:49:20.273312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.391 [2024-11-22 08:49:20.273329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:45.391 [2024-11-22 08:49:20.273341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:45.391 [2024-11-22 08:49:20.273351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.391 [2024-11-22 08:49:20.273378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.391 [2024-11-22 08:49:20.273394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:45.391 [2024-11-22 08:49:20.273405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:45.391 [2024-11-22 08:49:20.273415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.391 [2024-11-22 08:49:20.273449] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:45.391 [2024-11-22 08:49:20.273461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.391 [2024-11-22 08:49:20.273471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:45.391 [2024-11-22 08:49:20.273481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:45.391 [2024-11-22 08:49:20.273492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.391 [2024-11-22 08:49:20.310718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.391 [2024-11-22 08:49:20.310765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:45.391 [2024-11-22 08:49:20.310781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.261 ms 00:28:45.391 [2024-11-22 08:49:20.310792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.391 [2024-11-22 08:49:20.310875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.391 [2024-11-22 
08:49:20.310888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:45.391 [2024-11-22 08:49:20.310899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:28:45.391 [2024-11-22 08:49:20.310910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:45.391 [2024-11-22 08:49:20.312228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 381.795 ms, result 0
00:28:46.327  [2024-11-22T08:49:22.354Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-22T08:50:01.957Z] Copying: 1024/1024 [MB] (average 24 MBps)
[2024-11-22 08:50:01.886275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:26.870 [2024-11-22 08:50:01.886364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:26.870 [2024-11-22 08:50:01.886386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:29:26.870 [2024-11-22 08:50:01.886398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:26.870 [2024-11-22 08:50:01.889902] mngt/ftl_mngt_ioch.c:
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:26.870 [2024-11-22 08:50:01.895888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.870 [2024-11-22 08:50:01.895927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:26.870 [2024-11-22 08:50:01.895942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.945 ms 00:29:26.870 [2024-11-22 08:50:01.895964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.870 [2024-11-22 08:50:01.904602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.870 [2024-11-22 08:50:01.904645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:26.870 [2024-11-22 08:50:01.904659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.546 ms 00:29:26.870 [2024-11-22 08:50:01.904670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.870 [2024-11-22 08:50:01.928931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.870 [2024-11-22 08:50:01.928983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:26.870 [2024-11-22 08:50:01.928998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.280 ms 00:29:26.870 [2024-11-22 08:50:01.929009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.870 [2024-11-22 08:50:01.933968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.870 [2024-11-22 08:50:01.934176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:26.870 [2024-11-22 08:50:01.934198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.933 ms 00:29:26.870 [2024-11-22 08:50:01.934208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.130 [2024-11-22 08:50:01.971243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.130 [2024-11-22 08:50:01.971282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:27.130 [2024-11-22 08:50:01.971296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.031 ms 00:29:27.130 [2024-11-22 08:50:01.971308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.130 [2024-11-22 08:50:01.992794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.130 [2024-11-22 08:50:01.992842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:27.130 [2024-11-22 08:50:01.992857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.482 ms 00:29:27.130 [2024-11-22 08:50:01.992868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.130 [2024-11-22 08:50:02.108129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.130 [2024-11-22 08:50:02.108168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:27.130 [2024-11-22 08:50:02.108183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.408 ms 00:29:27.130 [2024-11-22 08:50:02.108201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.130 [2024-11-22 08:50:02.143979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.130 [2024-11-22 08:50:02.144015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:27.130 [2024-11-22 08:50:02.144028] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.819 ms 00:29:27.130 [2024-11-22 08:50:02.144038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.130 [2024-11-22 08:50:02.179371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.130 [2024-11-22 08:50:02.179407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:27.130 [2024-11-22 08:50:02.179420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.353 ms 00:29:27.130 [2024-11-22 08:50:02.179429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.390 [2024-11-22 08:50:02.213871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.390 [2024-11-22 08:50:02.213909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:27.390 [2024-11-22 08:50:02.213922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.461 ms 00:29:27.390 [2024-11-22 08:50:02.213933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.390 [2024-11-22 08:50:02.247512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.390 [2024-11-22 08:50:02.247549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:27.390 [2024-11-22 08:50:02.247561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.547 ms 00:29:27.390 [2024-11-22 08:50:02.247571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.390 [2024-11-22 08:50:02.247608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:27.390 [2024-11-22 08:50:02.247625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 99584 / 261120 wr_cnt: 1 state: open 00:29:27.390 [2024-11-22 08:50:02.247638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247775] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.247998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248062] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 
08:50:02.248338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:27.390 [2024-11-22 08:50:02.248472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:29:27.391 [2024-11-22 08:50:02.248593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:27.391 [2024-11-22 08:50:02.248722] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:27.391 [2024-11-22 08:50:02.248731] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 042fedb5-e077-4992-80f2-cab61d09911c 00:29:27.391 [2024-11-22 08:50:02.248742] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99584 00:29:27.391 [2024-11-22 08:50:02.248758] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100544 00:29:27.391 [2024-11-22 08:50:02.248779] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99584 00:29:27.391 [2024-11-22 08:50:02.248789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:29:27.391 [2024-11-22 08:50:02.248798] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:27.391 [2024-11-22 08:50:02.248808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:27.391 [2024-11-22 08:50:02.248818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:27.391 [2024-11-22 08:50:02.248828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:27.391 [2024-11-22 08:50:02.248837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:27.391 [2024-11-22 08:50:02.248847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.391 [2024-11-22 08:50:02.248857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:27.391 [2024-11-22 08:50:02.248867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:29:27.391 [2024-11-22 08:50:02.248877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.391 [2024-11-22 08:50:02.269065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:27.391 [2024-11-22 08:50:02.269242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:27.391 [2024-11-22 08:50:02.269263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.187 ms 00:29:27.391 [2024-11-22 08:50:02.269274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.391 [2024-11-22 08:50:02.269893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.391 [2024-11-22 08:50:02.269910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:27.391 [2024-11-22 08:50:02.269922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:29:27.391 [2024-11-22 08:50:02.269932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.391 [2024-11-22 08:50:02.322624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.391 [2024-11-22 08:50:02.322658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:27.391 [2024-11-22 08:50:02.322671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.391 [2024-11-22 08:50:02.322681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.391 [2024-11-22 08:50:02.322753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.391 [2024-11-22 08:50:02.322765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:27.391 [2024-11-22 08:50:02.322776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.391 [2024-11-22 08:50:02.322786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.391 [2024-11-22 08:50:02.322868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.391 [2024-11-22 08:50:02.322881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:27.391 [2024-11-22 08:50:02.322892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.391 [2024-11-22 08:50:02.322902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.391 [2024-11-22 08:50:02.322918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.391 [2024-11-22 08:50:02.322929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:27.391 [2024-11-22 08:50:02.322939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.391 [2024-11-22 08:50:02.322949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.391 [2024-11-22 08:50:02.450409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.391 [2024-11-22 08:50:02.450461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:27.391 [2024-11-22 08:50:02.450478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.391 [2024-11-22 08:50:02.450488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.650 [2024-11-22 08:50:02.551243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.651 [2024-11-22 08:50:02.551296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:27.651 [2024-11-22 08:50:02.551311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.651 [2024-11-22 08:50:02.551322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.651 [2024-11-22 
08:50:02.551441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.651 [2024-11-22 08:50:02.551454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:27.651 [2024-11-22 08:50:02.551466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.651 [2024-11-22 08:50:02.551476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.651 [2024-11-22 08:50:02.551523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.651 [2024-11-22 08:50:02.551534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:27.651 [2024-11-22 08:50:02.551546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.651 [2024-11-22 08:50:02.551556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.651 [2024-11-22 08:50:02.551680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.651 [2024-11-22 08:50:02.551699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:27.651 [2024-11-22 08:50:02.551710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.651 [2024-11-22 08:50:02.551720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.651 [2024-11-22 08:50:02.551758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.651 [2024-11-22 08:50:02.551770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:27.651 [2024-11-22 08:50:02.551781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.651 [2024-11-22 08:50:02.551791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.651 [2024-11-22 08:50:02.551838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.651 [2024-11-22 08:50:02.551855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:27.651 [2024-11-22 08:50:02.551865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.651 [2024-11-22 08:50:02.551875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.651 [2024-11-22 08:50:02.551924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.651 [2024-11-22 08:50:02.551936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:27.651 [2024-11-22 08:50:02.551946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.651 [2024-11-22 08:50:02.551979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.651 [2024-11-22 08:50:02.552126] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 668.143 ms, result 0 00:29:29.029 00:29:29.029 00:29:29.029 08:50:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:30.410 08:50:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:30.669 [2024-11-22 08:50:05.501091] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
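
The spdk_dd invocation above reads the test data back out of ftl0 after the dirty shutdown. Its size checks out: --count=262144 blocks at the 4096-byte block size implied by the "Copying: .../1024 [MB]" totals is exactly 1 GiB. A minimal sketch of the read-back-and-checksum pattern the test appears to follow (the real logic lives in test/ftl/dirty_shutdown.sh; the comparison step below is illustrative, not copied from that script):

    #!/usr/bin/env bash
    # Illustrative sketch only -- not the actual dirty_shutdown.sh logic.
    set -e
    SPDK=/home/vagrant/spdk_repo/spdk
    echo $((262144 * 4096))   # 1073741824 B = 1024 MiB, matching the copy totals above
    # Read the data back through the restarted FTL instance:
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
        --count=262144 --json="$SPDK/test/ftl/config/ftl.json"
    # Verify against the checksum taken before the dirty shutdown (file pairing assumed):
    md5sum "$SPDK/test/ftl/testfile" "$SPDK/test/ftl/testfile2"
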
00:29:30.669 [2024-11-22 08:50:05.501202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82418 ] 00:29:30.669 [2024-11-22 08:50:05.680588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.928 [2024-11-22 08:50:05.785817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.187 [2024-11-22 08:50:06.120050] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:31.187 [2024-11-22 08:50:06.120284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:31.447 [2024-11-22 08:50:06.279812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.447 [2024-11-22 08:50:06.280043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:31.447 [2024-11-22 08:50:06.280176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:31.447 [2024-11-22 08:50:06.280195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.447 [2024-11-22 08:50:06.280261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.447 [2024-11-22 08:50:06.280275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:31.447 [2024-11-22 08:50:06.280290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:31.447 [2024-11-22 08:50:06.280301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.447 [2024-11-22 08:50:06.280325] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:31.447 [2024-11-22 08:50:06.281454] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:31.447 [2024-11-22 08:50:06.281611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.447 [2024-11-22 08:50:06.281694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:31.447 [2024-11-22 08:50:06.281734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:29:31.447 [2024-11-22 08:50:06.281766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.447 [2024-11-22 08:50:06.283415] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:31.447 [2024-11-22 08:50:06.302364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.447 [2024-11-22 08:50:06.302506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:31.447 [2024-11-22 08:50:06.302651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.981 ms 00:29:31.447 [2024-11-22 08:50:06.302669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.447 [2024-11-22 08:50:06.302741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.447 [2024-11-22 08:50:06.302755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:31.447 [2024-11-22 08:50:06.302766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:31.447 [2024-11-22 08:50:06.302776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.447 [2024-11-22 08:50:06.309535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
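
Every management step in this log is emitted by trace_step in mngt/ftl_mngt.c as a four-entry group: Action, name, duration, status. Assuming one entry per line as in the raw console output, a small awk filter (illustrative, not part of the test suite) pulls out per-step timings when skimming a run like this:

    awk '/trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); print step ": " $0 }' build.log

which reduces the quartets to lines like "Check configuration: 0.005 ms".
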
00:29:31.447 [2024-11-22 08:50:06.309562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:31.448 [2024-11-22 08:50:06.309574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.698 ms 00:29:31.448 [2024-11-22 08:50:06.309583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.448 [2024-11-22 08:50:06.309659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.448 [2024-11-22 08:50:06.309672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:31.448 [2024-11-22 08:50:06.309681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:31.448 [2024-11-22 08:50:06.309691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.448 [2024-11-22 08:50:06.309728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.448 [2024-11-22 08:50:06.309739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:31.448 [2024-11-22 08:50:06.309749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:31.448 [2024-11-22 08:50:06.309758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.448 [2024-11-22 08:50:06.309780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:31.448 [2024-11-22 08:50:06.314539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.448 [2024-11-22 08:50:06.314569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:31.448 [2024-11-22 08:50:06.314580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:29:31.448 [2024-11-22 08:50:06.314609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.448 [2024-11-22 08:50:06.314639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.448 [2024-11-22 08:50:06.314650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:31.448 [2024-11-22 08:50:06.314661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:31.448 [2024-11-22 08:50:06.314670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.448 [2024-11-22 08:50:06.314731] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:31.448 [2024-11-22 08:50:06.314755] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:31.448 [2024-11-22 08:50:06.314788] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:31.448 [2024-11-22 08:50:06.314808] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:31.448 [2024-11-22 08:50:06.314894] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:31.448 [2024-11-22 08:50:06.314907] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:31.448 [2024-11-22 08:50:06.314919] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:31.448 [2024-11-22 08:50:06.314932] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:31.448 [2024-11-22 08:50:06.314944] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:31.448 [2024-11-22 08:50:06.314974] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:31.448 [2024-11-22 08:50:06.314985] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:31.448 [2024-11-22 08:50:06.314994] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:31.448 [2024-11-22 08:50:06.315004] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:31.448 [2024-11-22 08:50:06.315018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.448 [2024-11-22 08:50:06.315028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:31.448 [2024-11-22 08:50:06.315039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:29:31.448 [2024-11-22 08:50:06.315048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.448 [2024-11-22 08:50:06.315122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.448 [2024-11-22 08:50:06.315150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:31.448 [2024-11-22 08:50:06.315160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:31.448 [2024-11-22 08:50:06.315170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.448 [2024-11-22 08:50:06.315262] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:31.448 [2024-11-22 08:50:06.315280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:31.448 [2024-11-22 08:50:06.315291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:31.448 [2024-11-22 08:50:06.315302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:31.448 [2024-11-22 08:50:06.315321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:31.448 [2024-11-22 08:50:06.315341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:31.448 [2024-11-22 08:50:06.315350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:31.448 [2024-11-22 08:50:06.315370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:31.448 [2024-11-22 08:50:06.315379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:31.448 [2024-11-22 08:50:06.315388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:31.448 [2024-11-22 08:50:06.315397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:31.448 [2024-11-22 08:50:06.315406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:31.448 [2024-11-22 08:50:06.315424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:31.448 [2024-11-22 08:50:06.315442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:31.448 [2024-11-22 08:50:06.315452] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:31.448 [2024-11-22 08:50:06.315470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.448 [2024-11-22 08:50:06.315488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:31.448 [2024-11-22 08:50:06.315498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.448 [2024-11-22 08:50:06.315516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:31.448 [2024-11-22 08:50:06.315525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.448 [2024-11-22 08:50:06.315543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:31.448 [2024-11-22 08:50:06.315553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.448 [2024-11-22 08:50:06.315570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:31.448 [2024-11-22 08:50:06.315579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:31.448 [2024-11-22 08:50:06.315588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:31.448 [2024-11-22 08:50:06.315597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:31.448 [2024-11-22 08:50:06.315606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:31.448 [2024-11-22 08:50:06.315615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:31.448 [2024-11-22 08:50:06.315624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:31.448 [2024-11-22 08:50:06.315633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:31.449 [2024-11-22 08:50:06.315643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.449 [2024-11-22 08:50:06.315653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:31.449 [2024-11-22 08:50:06.315661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:31.449 [2024-11-22 08:50:06.315670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.449 [2024-11-22 08:50:06.315679] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:31.449 [2024-11-22 08:50:06.315689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:31.449 [2024-11-22 08:50:06.315699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:31.449 [2024-11-22 08:50:06.315708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.449 [2024-11-22 08:50:06.315718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:31.449 [2024-11-22 08:50:06.315727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:31.449 [2024-11-22 08:50:06.315737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:31.449 
[2024-11-22 08:50:06.315746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:31.449 [2024-11-22 08:50:06.315755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:31.449 [2024-11-22 08:50:06.315765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:31.449 [2024-11-22 08:50:06.315775] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:31.449 [2024-11-22 08:50:06.315787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:31.449 [2024-11-22 08:50:06.315799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:31.449 [2024-11-22 08:50:06.315810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:31.449 [2024-11-22 08:50:06.315820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:31.449 [2024-11-22 08:50:06.315831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:31.449 [2024-11-22 08:50:06.315841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:31.449 [2024-11-22 08:50:06.315851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:31.449 [2024-11-22 08:50:06.315861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:31.449 [2024-11-22 08:50:06.315871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:31.449 [2024-11-22 08:50:06.315881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:31.449 [2024-11-22 08:50:06.315891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:31.449 [2024-11-22 08:50:06.315901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:31.449 [2024-11-22 08:50:06.315912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:31.449 [2024-11-22 08:50:06.315922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:31.449 [2024-11-22 08:50:06.315933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:31.449 [2024-11-22 08:50:06.315943] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:31.449 [2024-11-22 08:50:06.315957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:31.449 [2024-11-22 08:50:06.315980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:31.449 [2024-11-22 08:50:06.315991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:31.449 [2024-11-22 08:50:06.316002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:31.449 [2024-11-22 08:50:06.316013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:31.449 [2024-11-22 08:50:06.316024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.316034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:31.449 [2024-11-22 08:50:06.316045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:29:31.449 [2024-11-22 08:50:06.316054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.354460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.354495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:31.449 [2024-11-22 08:50:06.354508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.424 ms 00:29:31.449 [2024-11-22 08:50:06.354519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.354594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.354604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:31.449 [2024-11-22 08:50:06.354614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:31.449 [2024-11-22 08:50:06.354623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.422911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.422946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:31.449 [2024-11-22 08:50:06.422985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.347 ms 00:29:31.449 [2024-11-22 08:50:06.422996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.423035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.423047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:31.449 [2024-11-22 08:50:06.423058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:31.449 [2024-11-22 08:50:06.423072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.423567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.423587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:31.449 [2024-11-22 08:50:06.423598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:29:31.449 [2024-11-22 08:50:06.423609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.423724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.423737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:31.449 [2024-11-22 08:50:06.423748] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:29:31.449 [2024-11-22 08:50:06.423764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.441946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.441992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:31.449 [2024-11-22 08:50:06.442009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.191 ms 00:29:31.449 [2024-11-22 08:50:06.442036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.459768] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:31.449 [2024-11-22 08:50:06.459806] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:31.449 [2024-11-22 08:50:06.459820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.459831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:31.449 [2024-11-22 08:50:06.459842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.713 ms 00:29:31.449 [2024-11-22 08:50:06.459851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.487700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.487745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:31.449 [2024-11-22 08:50:06.487758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.854 ms 00:29:31.449 [2024-11-22 08:50:06.487768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.504832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.449 [2024-11-22 08:50:06.504878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:31.449 [2024-11-22 08:50:06.504890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.050 ms 00:29:31.449 [2024-11-22 08:50:06.504914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.449 [2024-11-22 08:50:06.522314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.450 [2024-11-22 08:50:06.522452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:31.450 [2024-11-22 08:50:06.522471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.391 ms 00:29:31.450 [2024-11-22 08:50:06.522497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.450 [2024-11-22 08:50:06.523213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.450 [2024-11-22 08:50:06.523240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:31.450 [2024-11-22 08:50:06.523252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:29:31.450 [2024-11-22 08:50:06.523266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.709 [2024-11-22 08:50:06.603573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.709 [2024-11-22 08:50:06.603635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:31.709 [2024-11-22 08:50:06.603658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.415 ms 00:29:31.709 [2024-11-22 08:50:06.603668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.709 [2024-11-22 08:50:06.614048] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:31.709 [2024-11-22 08:50:06.616282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.709 [2024-11-22 08:50:06.616321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:31.709 [2024-11-22 08:50:06.616333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.589 ms 00:29:31.709 [2024-11-22 08:50:06.616343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.709 [2024-11-22 08:50:06.616415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.709 [2024-11-22 08:50:06.616427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:31.709 [2024-11-22 08:50:06.616438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:31.709 [2024-11-22 08:50:06.616450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.709 [2024-11-22 08:50:06.617899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.709 [2024-11-22 08:50:06.617937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:31.709 [2024-11-22 08:50:06.617949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:29:31.709 [2024-11-22 08:50:06.617972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.709 [2024-11-22 08:50:06.617999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.709 [2024-11-22 08:50:06.618011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:31.709 [2024-11-22 08:50:06.618021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:31.709 [2024-11-22 08:50:06.618031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.709 [2024-11-22 08:50:06.618070] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:31.709 [2024-11-22 08:50:06.618086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.709 [2024-11-22 08:50:06.618096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:31.709 [2024-11-22 08:50:06.618106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:31.709 [2024-11-22 08:50:06.618116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.709 [2024-11-22 08:50:06.653515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.709 [2024-11-22 08:50:06.653667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:31.709 [2024-11-22 08:50:06.653688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.438 ms 00:29:31.709 [2024-11-22 08:50:06.653722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.709 [2024-11-22 08:50:06.653849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.709 [2024-11-22 08:50:06.653864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:31.709 [2024-11-22 08:50:06.653875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:31.709 [2024-11-22 08:50:06.653885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
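
One more consistency check on the layout dump above: with an L2P address size of 4 bytes, the l2p region is sized at exactly one entry per mapped logical block,

    20971520 L2P entries x 4 B = 83886080 B = 80.00 MiB

matching the "Region l2p ... blocks: 80.00 MiB" line in the NV cache layout dump.
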
00:29:31.709 [2024-11-22 08:50:06.654933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.291 ms, result 0
00:29:33.089  [2024-11-22T08:50:09.142Z] Copying: 1216/1048576 [kB] (1216 kBps)
[... 32 intermediate progress updates elided; throughput ramps from ~1-7 MBps to a steady 29-34 MBps ...]
[2024-11-22T08:50:41.994Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-22 08:50:41.846046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:06.907 [2024-11-22 08:50:41.846188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:30:06.907 [2024-11-22 08:50:41.846266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:30:06.907 [2024-11-22 08:50:41.846303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:06.907 [2024-11-22 08:50:41.846376] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:06.908 [2024-11-22 08:50:41.855570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:06.908 [2024-11-22 08:50:41.855886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:30:06.908 [2024-11-22 08:50:41.855934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.155 ms
00:30:06.908 [2024-11-22 08:50:41.855978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:06.908 [2024-11-22 08:50:41.856424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:06.908 [2024-11-22 08:50:41.856461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:30:06.908 [2024-11-22 
08:50:41.856497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:30:06.908 [2024-11-22 08:50:41.856519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.908 [2024-11-22 08:50:41.871450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.908 [2024-11-22 08:50:41.871509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:06.908 [2024-11-22 08:50:41.871528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.917 ms 00:30:06.908 [2024-11-22 08:50:41.871542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.908 [2024-11-22 08:50:41.876523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.908 [2024-11-22 08:50:41.876570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:06.908 [2024-11-22 08:50:41.876585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.941 ms 00:30:06.908 [2024-11-22 08:50:41.876624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.908 [2024-11-22 08:50:41.913631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.908 [2024-11-22 08:50:41.913821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:06.908 [2024-11-22 08:50:41.913862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.008 ms 00:30:06.908 [2024-11-22 08:50:41.913886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.908 [2024-11-22 08:50:41.934720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.908 [2024-11-22 08:50:41.934764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:06.908 [2024-11-22 08:50:41.934780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.823 ms 00:30:06.908 [2024-11-22 08:50:41.934808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.908 [2024-11-22 08:50:41.937138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.908 [2024-11-22 08:50:41.937182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:06.908 [2024-11-22 08:50:41.937198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:30:06.908 [2024-11-22 08:50:41.937210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.908 [2024-11-22 08:50:41.971888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.908 [2024-11-22 08:50:41.971930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:06.908 [2024-11-22 08:50:41.971945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.705 ms 00:30:06.908 [2024-11-22 08:50:41.971982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.168 [2024-11-22 08:50:42.007373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.168 [2024-11-22 08:50:42.007426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:07.168 [2024-11-22 08:50:42.007455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.407 ms 00:30:07.168 [2024-11-22 08:50:42.007466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.168 [2024-11-22 08:50:42.041868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.168 [2024-11-22 08:50:42.041909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:30:07.168 [2024-11-22 08:50:42.041925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.417 ms 00:30:07.168 [2024-11-22 08:50:42.041952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.168 [2024-11-22 08:50:42.075877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.168 [2024-11-22 08:50:42.076078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:07.168 [2024-11-22 08:50:42.076102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.886 ms 00:30:07.168 [2024-11-22 08:50:42.076114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.168 [2024-11-22 08:50:42.076180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:07.168 [2024-11-22 08:50:42.076202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:07.168 [2024-11-22 08:50:42.076218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:07.168 [2024-11-22 08:50:42.076231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:07.168 [2024-11-22 08:50:42.076243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:07.168 [2024-11-22 08:50:42.076256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:07.168 [2024-11-22 08:50:42.076268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:07.168 [2024-11-22 08:50:42.076280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:07.168 [2024-11-22 08:50:42.076292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:07.168 [2024-11-22 08:50:42.076304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
00:30:07.169 [2024-11-22 08:50:42.076437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.076993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077380] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:07.169 [2024-11-22 08:50:42.077416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:07.170 [2024-11-22 08:50:42.077428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:07.170 [2024-11-22 08:50:42.077440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:07.170 [2024-11-22 08:50:42.077452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:07.170 [2024-11-22 08:50:42.077470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:07.170 [2024-11-22 08:50:42.077481] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 042fedb5-e077-4992-80f2-cab61d09911c 00:30:07.170 [2024-11-22 08:50:42.077504] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:07.170 [2024-11-22 08:50:42.077516] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 165056 00:30:07.170 [2024-11-22 08:50:42.077526] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 163072 00:30:07.170 [2024-11-22 08:50:42.077542] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:30:07.170 [2024-11-22 08:50:42.077553] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:07.170 [2024-11-22 08:50:42.077564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:07.170 [2024-11-22 08:50:42.077575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:07.170 [2024-11-22 08:50:42.077595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:07.170 [2024-11-22 08:50:42.077605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:07.170 [2024-11-22 08:50:42.077616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.170 [2024-11-22 08:50:42.077627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:07.170 [2024-11-22 08:50:42.077639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:30:07.170 [2024-11-22 08:50:42.077650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.170 [2024-11-22 08:50:42.096442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.170 [2024-11-22 08:50:42.096484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:07.170 [2024-11-22 08:50:42.096498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.783 ms 00:30:07.170 [2024-11-22 08:50:42.096509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.170 [2024-11-22 08:50:42.097075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.170 [2024-11-22 08:50:42.097089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:07.170 [2024-11-22 08:50:42.097102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:30:07.170 [2024-11-22 08:50:42.097114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:30:07.170 [2024-11-22 08:50:42.145075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.170 [2024-11-22 08:50:42.145113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:07.170 [2024-11-22 08:50:42.145128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.170 [2024-11-22 08:50:42.145156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.170 [2024-11-22 08:50:42.145236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.170 [2024-11-22 08:50:42.145251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:07.170 [2024-11-22 08:50:42.145263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.170 [2024-11-22 08:50:42.145275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.170 [2024-11-22 08:50:42.145348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.170 [2024-11-22 08:50:42.145369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:07.170 [2024-11-22 08:50:42.145380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.170 [2024-11-22 08:50:42.145391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.170 [2024-11-22 08:50:42.145409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.170 [2024-11-22 08:50:42.145420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:07.170 [2024-11-22 08:50:42.145432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.170 [2024-11-22 08:50:42.145442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.264209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.430 [2024-11-22 08:50:42.264441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:07.430 [2024-11-22 08:50:42.264627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.430 [2024-11-22 08:50:42.264671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.360618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.430 [2024-11-22 08:50:42.360802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:07.430 [2024-11-22 08:50:42.361003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.430 [2024-11-22 08:50:42.361048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.361176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.430 [2024-11-22 08:50:42.361276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:07.430 [2024-11-22 08:50:42.361298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.430 [2024-11-22 08:50:42.361310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.361355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.430 [2024-11-22 08:50:42.361368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:07.430 [2024-11-22 08:50:42.361381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.430 [2024-11-22 
08:50:42.361392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.361513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.430 [2024-11-22 08:50:42.361528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:07.430 [2024-11-22 08:50:42.361541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.430 [2024-11-22 08:50:42.361559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.361606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.430 [2024-11-22 08:50:42.361621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:07.430 [2024-11-22 08:50:42.361633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.430 [2024-11-22 08:50:42.361645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.361686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.430 [2024-11-22 08:50:42.361698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:07.430 [2024-11-22 08:50:42.361711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.430 [2024-11-22 08:50:42.361727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.361772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.430 [2024-11-22 08:50:42.361785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:07.430 [2024-11-22 08:50:42.361798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.430 [2024-11-22 08:50:42.361810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.430 [2024-11-22 08:50:42.361935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 516.730 ms, result 0 00:30:08.367 00:30:08.367 00:30:08.367 08:50:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:10.275 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:10.275 08:50:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:10.275 [2024-11-22 08:50:45.080370] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
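The step at dirty_shutdown.sh@95 reads the second half of the FTL device back out through spdk_dd. Its --count and --skip arguments are in logical blocks; assuming FTL's 4 KiB block size (consistent with the 1024 MB totals in the copy-progress lines), the numbers reduce to "skip the first 1 GiB, read the next 1 GiB". A back-of-the-envelope check, not part of the test itself:

```python
# Assumed FTL logical block size; the 1024 MB copy totals corroborate it.
BLOCK_SIZE = 4096        # bytes
count = skip = 262144    # from the spdk_dd invocation above

print(count * BLOCK_SIZE // (1024 * 1024))  # -> 1024 MiB read from ftl0
print(skip * BLOCK_SIZE // (1024 ** 3))     # -> 1 GiB skipped at the front
```

The skipped first gigabyte is the region already verified by the md5sum -c step above, so the second half written to testfile2 can be checksummed the same way.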
00:30:10.275 [2024-11-22 08:50:45.080663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82815 ] 00:30:10.275 [2024-11-22 08:50:45.261198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.533 [2024-11-22 08:50:45.364611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.792 [2024-11-22 08:50:45.717622] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:10.792 [2024-11-22 08:50:45.717695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:11.051 [2024-11-22 08:50:45.879782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.879837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:11.051 [2024-11-22 08:50:45.879876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:11.051 [2024-11-22 08:50:45.879888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.879941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.879955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:11.051 [2024-11-22 08:50:45.879985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:11.051 [2024-11-22 08:50:45.879997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.880022] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:11.051 [2024-11-22 08:50:45.880995] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:11.051 [2024-11-22 08:50:45.881189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.881206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:11.051 [2024-11-22 08:50:45.881220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:30:11.051 [2024-11-22 08:50:45.881232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.882800] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:11.051 [2024-11-22 08:50:45.900530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.900685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:11.051 [2024-11-22 08:50:45.900725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.759 ms 00:30:11.051 [2024-11-22 08:50:45.900738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.900807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.900822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:11.051 [2024-11-22 08:50:45.900835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:11.051 [2024-11-22 08:50:45.900848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.907776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:11.051 [2024-11-22 08:50:45.907936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:11.051 [2024-11-22 08:50:45.907989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.857 ms 00:30:11.051 [2024-11-22 08:50:45.908002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.908095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.908111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:11.051 [2024-11-22 08:50:45.908124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:11.051 [2024-11-22 08:50:45.908136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.908183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.908196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:11.051 [2024-11-22 08:50:45.908209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:11.051 [2024-11-22 08:50:45.908221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.908249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:11.051 [2024-11-22 08:50:45.912821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.912855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.051 [2024-11-22 08:50:45.912879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:30:11.051 [2024-11-22 08:50:45.912895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.051 [2024-11-22 08:50:45.912933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.051 [2024-11-22 08:50:45.912946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:11.051 [2024-11-22 08:50:45.912971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:11.051 [2024-11-22 08:50:45.913000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.052 [2024-11-22 08:50:45.913059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:11.052 [2024-11-22 08:50:45.913084] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:11.052 [2024-11-22 08:50:45.913120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:11.052 [2024-11-22 08:50:45.913143] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:11.052 [2024-11-22 08:50:45.913230] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:11.052 [2024-11-22 08:50:45.913246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:11.052 [2024-11-22 08:50:45.913261] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:11.052 [2024-11-22 08:50:45.913275] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:11.052 [2024-11-22 08:50:45.913289] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:11.052 [2024-11-22 08:50:45.913301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:11.052 [2024-11-22 08:50:45.913313] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:11.052 [2024-11-22 08:50:45.913325] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:11.052 [2024-11-22 08:50:45.913336] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:11.052 [2024-11-22 08:50:45.913353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.052 [2024-11-22 08:50:45.913365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:11.052 [2024-11-22 08:50:45.913377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:30:11.052 [2024-11-22 08:50:45.913404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.052 [2024-11-22 08:50:45.913483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.052 [2024-11-22 08:50:45.913497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:11.052 [2024-11-22 08:50:45.913509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:11.052 [2024-11-22 08:50:45.913520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.052 [2024-11-22 08:50:45.913620] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:11.052 [2024-11-22 08:50:45.913641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:11.052 [2024-11-22 08:50:45.913654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.052 [2024-11-22 08:50:45.913666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:11.052 [2024-11-22 08:50:45.913689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:11.052 [2024-11-22 08:50:45.913712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:11.052 [2024-11-22 08:50:45.913723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.052 [2024-11-22 08:50:45.913746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:11.052 [2024-11-22 08:50:45.913759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:11.052 [2024-11-22 08:50:45.913770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.052 [2024-11-22 08:50:45.913782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:11.052 [2024-11-22 08:50:45.913794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:11.052 [2024-11-22 08:50:45.913815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:11.052 [2024-11-22 08:50:45.913837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:11.052 [2024-11-22 08:50:45.913848] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:11.052 [2024-11-22 08:50:45.913870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.052 [2024-11-22 08:50:45.913892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:11.052 [2024-11-22 08:50:45.913904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.052 [2024-11-22 08:50:45.913925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:11.052 [2024-11-22 08:50:45.913936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.052 [2024-11-22 08:50:45.913958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:11.052 [2024-11-22 08:50:45.913969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:11.052 [2024-11-22 08:50:45.913979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.052 [2024-11-22 08:50:45.914011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:11.052 [2024-11-22 08:50:45.914023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:11.052 [2024-11-22 08:50:45.914034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.052 [2024-11-22 08:50:45.914045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:11.052 [2024-11-22 08:50:45.914056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:11.052 [2024-11-22 08:50:45.914067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.052 [2024-11-22 08:50:45.914078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:11.052 [2024-11-22 08:50:45.914089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:11.052 [2024-11-22 08:50:45.914101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.052 [2024-11-22 08:50:45.914112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:11.052 [2024-11-22 08:50:45.914123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:11.052 [2024-11-22 08:50:45.914134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.052 [2024-11-22 08:50:45.914145] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:11.052 [2024-11-22 08:50:45.914157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:11.052 [2024-11-22 08:50:45.914168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.052 [2024-11-22 08:50:45.914180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.052 [2024-11-22 08:50:45.914193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:11.052 [2024-11-22 08:50:45.914204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:11.052 [2024-11-22 08:50:45.914215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:11.052 
[2024-11-22 08:50:45.914226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:11.052 [2024-11-22 08:50:45.914237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:11.052 [2024-11-22 08:50:45.914248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:11.052 [2024-11-22 08:50:45.914261] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:11.052 [2024-11-22 08:50:45.914275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.052 [2024-11-22 08:50:45.914288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:11.052 [2024-11-22 08:50:45.914301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:11.052 [2024-11-22 08:50:45.914313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:11.052 [2024-11-22 08:50:45.914325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:11.052 [2024-11-22 08:50:45.914337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:11.052 [2024-11-22 08:50:45.914349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:11.052 [2024-11-22 08:50:45.914361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:11.052 [2024-11-22 08:50:45.914373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:11.052 [2024-11-22 08:50:45.914385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:11.052 [2024-11-22 08:50:45.914397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:11.052 [2024-11-22 08:50:45.914409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:11.052 [2024-11-22 08:50:45.914421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:11.052 [2024-11-22 08:50:45.914433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:11.052 [2024-11-22 08:50:45.914445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:11.052 [2024-11-22 08:50:45.914457] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:11.052 [2024-11-22 08:50:45.914474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.052 [2024-11-22 08:50:45.914487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.052 [2024-11-22 08:50:45.914500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:11.052 [2024-11-22 08:50:45.914512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:11.052 [2024-11-22 08:50:45.914525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:11.052 [2024-11-22 08:50:45.914539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.052 [2024-11-22 08:50:45.914551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:11.053 [2024-11-22 08:50:45.914563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:30:11.053 [2024-11-22 08:50:45.914574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:45.954378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:45.954586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:11.053 [2024-11-22 08:50:45.954612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.807 ms 00:30:11.053 [2024-11-22 08:50:45.954626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:45.954726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:45.954740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:11.053 [2024-11-22 08:50:45.954752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:11.053 [2024-11-22 08:50:45.954764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:46.030347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:46.030390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:11.053 [2024-11-22 08:50:46.030406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.635 ms 00:30:11.053 [2024-11-22 08:50:46.030436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:46.030485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:46.030499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:11.053 [2024-11-22 08:50:46.030513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:11.053 [2024-11-22 08:50:46.030530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:46.031069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:46.031087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:11.053 [2024-11-22 08:50:46.031101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:30:11.053 [2024-11-22 08:50:46.031113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:46.031239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:46.031255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:11.053 [2024-11-22 08:50:46.031267] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:30:11.053 [2024-11-22 08:50:46.031287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:46.050829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:46.050870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:11.053 [2024-11-22 08:50:46.050906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.550 ms 00:30:11.053 [2024-11-22 08:50:46.050919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:46.070135] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:11.053 [2024-11-22 08:50:46.070177] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:11.053 [2024-11-22 08:50:46.070193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:46.070221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:11.053 [2024-11-22 08:50:46.070234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.171 ms 00:30:11.053 [2024-11-22 08:50:46.070246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:46.098741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:46.098793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:11.053 [2024-11-22 08:50:46.098810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.492 ms 00:30:11.053 [2024-11-22 08:50:46.098822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.053 [2024-11-22 08:50:46.116911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.053 [2024-11-22 08:50:46.116968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:11.053 [2024-11-22 08:50:46.116984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.054 ms 00:30:11.053 [2024-11-22 08:50:46.116996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.134801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.134844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:11.312 [2024-11-22 08:50:46.134859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.787 ms 00:30:11.312 [2024-11-22 08:50:46.134871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.135588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.135626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:11.312 [2024-11-22 08:50:46.135641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:30:11.312 [2024-11-22 08:50:46.135658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.219657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.219725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:11.312 [2024-11-22 08:50:46.219768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.101 ms 00:30:11.312 [2024-11-22 08:50:46.219792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.230247] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:11.312 [2024-11-22 08:50:46.233103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.233138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:11.312 [2024-11-22 08:50:46.233155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.275 ms 00:30:11.312 [2024-11-22 08:50:46.233167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.233265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.233280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:11.312 [2024-11-22 08:50:46.233293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:11.312 [2024-11-22 08:50:46.233309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.234316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.234430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:11.312 [2024-11-22 08:50:46.234507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:30:11.312 [2024-11-22 08:50:46.234547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.234612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.234898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:11.312 [2024-11-22 08:50:46.234945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:11.312 [2024-11-22 08:50:46.235003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.235081] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:11.312 [2024-11-22 08:50:46.235274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.235294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:11.312 [2024-11-22 08:50:46.235308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:30:11.312 [2024-11-22 08:50:46.235320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.269963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.270004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:11.312 [2024-11-22 08:50:46.270020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.635 ms 00:30:11.312 [2024-11-22 08:50:46.270056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.312 [2024-11-22 08:50:46.270134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.312 [2024-11-22 08:50:46.270162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:11.312 [2024-11-22 08:50:46.270176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:11.312 [2024-11-22 08:50:46.270187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
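The 'Dump statistics' step in the shutdown sequence further up reported total writes 165056 against user writes 163072; the WAF figure printed there (1.0122) is simply their ratio, i.e. how much the FTL wrote on top of what the user submitted. A one-line check with the values copied from that dump:

```python
# Values copied verbatim from the ftl_dev_dump_stats output above.
total_writes = 165056   # everything the FTL wrote to the media
user_writes = 163072    # writes actually submitted by the user

print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.0122
```

A WAF this close to 1.0 means the extra 1984 writes (roughly 1.2%) are metadata and relocation overhead; the dirty-shutdown workload triggered almost no garbage-collection traffic before the device was torn down.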
00:30:12.691 [2024-11-22T08:50:48.715Z] Copying: 24/1024 [MB] (24 MBps)
[2024-11-22T08:51:09.496Z] Copying: 520/1024 [MB] (24 MBps)
[2024-11-22T08:51:28.339Z] Copying: 1024/1024 [MB] (average 24 MBps)
[2024-11-22 08:51:28.211591] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinit core IO channel: duration 0.004 ms, status 0
[2024-11-22 08:51:28.212041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-22 08:51:28.217649] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Unregister IO device: duration 5.588 ms, status 0
[2024-11-22 08:51:28.218064] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Stop core poller: duration 0.279 ms, status 0
[2024-11-22 08:51:28.221225] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist L2P: duration 3.085 ms, status 0
[2024-11-22 08:51:28.226311] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Finish L2P trims: duration 5.006 ms, status 0
[2024-11-22 08:51:28.262061] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist NV cache metadata: duration 35.661 ms, status 0
[2024-11-22 08:51:28.281853] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist valid map metadata: duration 19.549 ms, status 0
[2024-11-22 08:51:28.284270] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist P2L metadata: duration 2.128 ms, status 0
[2024-11-22 08:51:28.318948] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist band info metadata: duration 34.633 ms, status 0
[2024-11-22 08:51:28.353515] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist trim metadata: duration 34.501 ms, status 0
[2024-11-22 08:51:28.386915] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist superblock: duration 33.334 ms, status 0
[2024-11-22 08:51:28.420925] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Set FTL clean state: duration 33.710 ms, status 0
[2024-11-22 08:51:28.421304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-22 08:51:28.421349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
[2024-11-22 08:51:28.421415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[2024-11-22 08:51:28.421525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 through Band 100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-22 08:51:28.426646] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 042fedb5-e077-4992-80f2-cab61d09911c, total valid LBAs: 262656, total writes: 960, user writes: 0, WAF: inf
[2024-11-22 08:51:28.426716] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-11-22 08:51:28.426788] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Dump statistics: duration 5.494 ms, status 0
[2024-11-22 08:51:28.445976] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinitialize L2P: duration 19.096 ms, status 0
[2024-11-22 08:51:28.446581] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinitialize P2L checkpointing: duration 0.508 ms, status 0
[2024-11-22 08:51:28.496579] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each duration 0.000 ms, status 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-11-22 08:51:28.707993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.158 ms, result 0
08:51:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:30:56.614 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
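The OK above is the point of the whole dirty-shutdown run: dirty_shutdown.sh hashes data written through the FTL bdev before the unclean shutdown, then verifies the hash once the device has been brought back up. A minimal sketch of that pattern (the device node and file names are illustrative placeholders, not the script's actual variables):

  # Dirty-shutdown integrity check, sketched: hash before, verify after recovery.
  dd if=/dev/urandom of=testfile bs=1M count=256           # generate a payload
  md5sum testfile > testfile.md5                           # record its checksum
  dd if=testfile of=/dev/nbd0 bs=1M oflag=direct           # write it through the FTL bdev (illustrative node)
  # ... kill the target without a clean shutdown, restart it, reload the FTL bdev ...
  dd if=/dev/nbd0 of=testfile bs=1M count=256 iflag=direct # read the data back
  md5sum -c testfile.md5                                   # expected output: testfile: OK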
08:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
08:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
08:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31-35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json testfile testfile2 testfile.md5 testfile2.md5
08:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81011
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81011) - No such process
Process with pid 81011 is not found
08:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
08:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
Remove shared memory files
08:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@205-209 -- # rm -f (shm files, /dev/shm/iscsi)
************************************
END TEST ftl_dirty_shutdown
************************************
real 3m39.760s
user 4m6.347s
sys 0m39.522s
08:51:32 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
************************************
START TEST ftl_upgrade_shutdown
************************************
08:51:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
* Looking for test storage...
00:30:57.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
08:51:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version | awk '{print $NF}'
08:51:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh -- # cmp_versions 1.15 '<' 2: lcov 1.15 predates 2.x, return 0
08:51:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
08:51:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706-1707 -- # export LCOV_OPTS and LCOV with the branch/function coverage and genhtml/geninfo flags
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12-25 -- # ftl_tgt_core_mask='[0]'; spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt; spdk_tgt_cpumask='[0]'; spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json; spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt; spdk_ini_cpumask='[1]'; spdk_ini_rpc=/var/tmp/spdk.tgt.sock; spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json; spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19-24 -- # FTL_BDEV=ftl; FTL_BASE=0000:00:11.0; FTL_BASE_SIZE=20480; FTL_CACHE=0000:00:10.0; FTL_CACHE_SIZE=5120; FTL_L2P_DRAM_LIMIT=2
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83358
08:51:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83358
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-11-22 08:51:32.416033] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
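Collected in one place, the knobs this run uses are the six FTL_* exports above plus the target cpumask; a standalone rendering follows (the script framing is illustrative, the values are taken verbatim from the log):

  export FTL_BDEV=ftl                # name of the FTL bdev under test
  export FTL_BASE=0000:00:11.0       # base (bulk) NVMe device, PCI address
  export FTL_BASE_SIZE=20480         # base bdev size in MiB (20 GiB)
  export FTL_CACHE=0000:00:10.0      # NV cache NVMe device, PCI address
  export FTL_CACHE_SIZE=5120         # cache size in MiB (5 GiB)
  export FTL_L2P_DRAM_LIMIT=2        # forwarded to bdev_ftl_create --l2p_dram_limit

  # tcp_target_setup then launches the target pinned to core 0 and waits on its RPC socket:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &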
00:30:57.392 [2024-11-22 08:51:32.416397] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83358 ] 00:30:57.651 [2024-11-22 08:51:32.598060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.651 [2024-11-22 08:51:32.705705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:58.588 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:58.848 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:58.848 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:58.848 08:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:58.848 08:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:58.848 08:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:58.848 08:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:58.848 08:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:58.848 08:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:59.108 { 00:30:59.108 "name": "basen1", 00:30:59.108 "aliases": [ 00:30:59.108 "bf80cd18-c3ce-47ca-b521-ffc1b2408582" 00:30:59.108 ], 00:30:59.108 "product_name": "NVMe disk", 00:30:59.108 "block_size": 4096, 00:30:59.108 "num_blocks": 1310720, 00:30:59.108 "uuid": "bf80cd18-c3ce-47ca-b521-ffc1b2408582", 00:30:59.108 "numa_id": -1, 00:30:59.108 "assigned_rate_limits": { 00:30:59.108 "rw_ios_per_sec": 0, 00:30:59.108 "rw_mbytes_per_sec": 0, 00:30:59.108 "r_mbytes_per_sec": 0, 00:30:59.108 "w_mbytes_per_sec": 0 00:30:59.108 }, 00:30:59.108 "claimed": true, 00:30:59.108 "claim_type": "read_many_write_one", 00:30:59.108 "zoned": false, 00:30:59.108 "supported_io_types": { 00:30:59.108 "read": true, 00:30:59.108 "write": true, 00:30:59.108 "unmap": true, 00:30:59.108 "flush": true, 00:30:59.108 "reset": true, 00:30:59.108 "nvme_admin": true, 00:30:59.108 "nvme_io": true, 00:30:59.108 "nvme_io_md": false, 00:30:59.108 "write_zeroes": true, 00:30:59.108 "zcopy": false, 00:30:59.108 "get_zone_info": false, 00:30:59.108 "zone_management": false, 00:30:59.108 "zone_append": false, 00:30:59.108 "compare": true, 00:30:59.108 "compare_and_write": false, 00:30:59.108 "abort": true, 00:30:59.108 "seek_hole": false, 00:30:59.108 "seek_data": false, 00:30:59.108 "copy": true, 00:30:59.108 "nvme_iov_md": false 00:30:59.108 }, 00:30:59.108 "driver_specific": { 00:30:59.108 "nvme": [ 00:30:59.108 { 00:30:59.108 "pci_address": "0000:00:11.0", 00:30:59.108 "trid": { 00:30:59.108 "trtype": "PCIe", 00:30:59.108 "traddr": "0000:00:11.0" 00:30:59.108 }, 00:30:59.108 "ctrlr_data": { 00:30:59.108 "cntlid": 0, 00:30:59.108 "vendor_id": "0x1b36", 00:30:59.108 "model_number": "QEMU NVMe Ctrl", 00:30:59.108 "serial_number": "12341", 00:30:59.108 "firmware_revision": "8.0.0", 00:30:59.108 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:59.108 "oacs": { 00:30:59.108 "security": 0, 00:30:59.108 "format": 1, 00:30:59.108 "firmware": 0, 00:30:59.108 "ns_manage": 1 00:30:59.108 }, 00:30:59.108 "multi_ctrlr": false, 00:30:59.108 "ana_reporting": false 00:30:59.108 }, 00:30:59.108 "vs": { 00:30:59.108 "nvme_version": "1.4" 00:30:59.108 }, 00:30:59.108 "ns_data": { 00:30:59.108 "id": 1, 00:30:59.108 "can_share": false 00:30:59.108 } 00:30:59.108 } 00:30:59.108 ], 00:30:59.108 "mp_policy": "active_passive" 00:30:59.108 } 00:30:59.108 } 00:30:59.108 ]' 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:59.108 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:59.367 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=71434ea2-1a09-4a3e-9cee-5be865f54b05 00:30:59.367 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:59.367 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71434ea2-1a09-4a3e-9cee-5be865f54b05 00:30:59.626 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=3fb52da3-f9dc-4319-a425-a67f7414c767 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 3fb52da3-f9dc-4319-a425-a67f7414c767 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3072c401-309e-4a98-a57b-33bd1ea58d8e 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3072c401-309e-4a98-a57b-33bd1ea58d8e ]] 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3072c401-309e-4a98-a57b-33bd1ea58d8e 5120 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3072c401-309e-4a98-a57b-33bd1ea58d8e 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3072c401-309e-4a98-a57b-33bd1ea58d8e 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3072c401-309e-4a98-a57b-33bd1ea58d8e 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:59.885 08:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3072c401-309e-4a98-a57b-33bd1ea58d8e 00:31:00.145 08:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:00.145 { 00:31:00.145 "name": "3072c401-309e-4a98-a57b-33bd1ea58d8e", 00:31:00.145 "aliases": [ 00:31:00.145 "lvs/basen1p0" 00:31:00.145 ], 00:31:00.145 "product_name": "Logical Volume", 00:31:00.145 "block_size": 4096, 00:31:00.145 "num_blocks": 5242880, 00:31:00.145 "uuid": "3072c401-309e-4a98-a57b-33bd1ea58d8e", 00:31:00.145 "assigned_rate_limits": { 00:31:00.145 "rw_ios_per_sec": 0, 00:31:00.145 "rw_mbytes_per_sec": 0, 00:31:00.145 "r_mbytes_per_sec": 0, 00:31:00.145 "w_mbytes_per_sec": 0 00:31:00.145 }, 00:31:00.145 "claimed": false, 00:31:00.145 "zoned": false, 00:31:00.145 "supported_io_types": { 00:31:00.145 "read": true, 00:31:00.145 "write": true, 00:31:00.145 "unmap": true, 00:31:00.145 "flush": false, 00:31:00.145 "reset": true, 00:31:00.145 "nvme_admin": false, 00:31:00.145 "nvme_io": false, 00:31:00.145 "nvme_io_md": false, 00:31:00.145 "write_zeroes": 
true, 00:31:00.145 "zcopy": false, 00:31:00.145 "get_zone_info": false, 00:31:00.145 "zone_management": false, 00:31:00.145 "zone_append": false, 00:31:00.145 "compare": false, 00:31:00.145 "compare_and_write": false, 00:31:00.145 "abort": false, 00:31:00.145 "seek_hole": true, 00:31:00.145 "seek_data": true, 00:31:00.145 "copy": false, 00:31:00.145 "nvme_iov_md": false 00:31:00.145 }, 00:31:00.145 "driver_specific": { 00:31:00.145 "lvol": { 00:31:00.145 "lvol_store_uuid": "3fb52da3-f9dc-4319-a425-a67f7414c767", 00:31:00.145 "base_bdev": "basen1", 00:31:00.145 "thin_provision": true, 00:31:00.145 "num_allocated_clusters": 0, 00:31:00.145 "snapshot": false, 00:31:00.145 "clone": false, 00:31:00.145 "esnap_clone": false 00:31:00.145 } 00:31:00.145 } 00:31:00.145 } 00:31:00.145 ]' 00:31:00.145 08:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:00.145 08:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:00.145 08:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:00.404 08:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:31:00.404 08:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:31:00.404 08:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:31:00.404 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:00.404 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:00.404 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:00.663 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:00.663 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:00.663 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:00.663 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:00.663 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:00.663 08:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3072c401-309e-4a98-a57b-33bd1ea58d8e -c cachen1p0 --l2p_dram_limit 2 00:31:00.923 [2024-11-22 08:51:35.878409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.923 [2024-11-22 08:51:35.878674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:00.923 [2024-11-22 08:51:35.878718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:00.923 [2024-11-22 08:51:35.878732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.923 [2024-11-22 08:51:35.878818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.923 [2024-11-22 08:51:35.878832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:00.923 [2024-11-22 08:51:35.878849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:00.923 [2024-11-22 08:51:35.878861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.923 [2024-11-22 08:51:35.878890] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:00.923 [2024-11-22 
08:51:35.879883] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
[2024-11-22 08:51:35.879915] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Open cache bdev: duration 1.029 ms, status 0
[2024-11-22 08:51:35.880060] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 0035fff1-cfb7-4a84-9ef2-6d820e8e15d2
[2024-11-22 08:51:35.881545] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Default-initialize superblock: duration 0.020 ms, status 0
[2024-11-22 08:51:35.889374] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Initialize memory pools: duration 7.714 ms, status 0
[2024-11-22 08:51:35.889800] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Initialize bands: duration 0.026 ms, status 0
[2024-11-22 08:51:35.890052] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Register IO device: duration 0.012 ms, status 0
[2024-11-22 08:51:35.890139] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
[2024-11-22 08:51:35.895028] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Initialize core IO channel: duration 4.903 ms, status 0
[2024-11-22 08:51:35.895151] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Decorate bands: duration 0.004 ms, status 0
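Strung together, the bdev stack the helpers above just built can be reproduced by hand with the same six RPCs; roughly (the UUIDs are placeholders to be read from each command's output, everything else is verbatim from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
  $rpc bdev_lvol_create_lvstore basen1 lvs                            # lvstore on the base device
  $rpc bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>           # thin-provisioned 20 GiB lvol
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # cachen1p0 becomes the NV cache
  $rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2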
[FTL][ftl] status: 0 00:31:00.923 [2024-11-22 08:51:35.895238] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:00.923 [2024-11-22 08:51:35.895379] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:00.923 [2024-11-22 08:51:35.895402] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:00.923 [2024-11-22 08:51:35.895417] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:00.923 [2024-11-22 08:51:35.895435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:00.923 [2024-11-22 08:51:35.895449] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:00.923 [2024-11-22 08:51:35.895465] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:00.923 [2024-11-22 08:51:35.895477] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:00.923 [2024-11-22 08:51:35.895495] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:00.923 [2024-11-22 08:51:35.895507] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:00.923 [2024-11-22 08:51:35.895521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.923 [2024-11-22 08:51:35.895533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:00.923 [2024-11-22 08:51:35.895549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:31:00.923 [2024-11-22 08:51:35.895561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.923 [2024-11-22 08:51:35.895639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.923 [2024-11-22 08:51:35.895652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:00.923 [2024-11-22 08:51:35.895668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:31:00.923 [2024-11-22 08:51:35.895693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.923 [2024-11-22 08:51:35.895796] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:00.923 [2024-11-22 08:51:35.895812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:00.923 [2024-11-22 08:51:35.895827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:00.923 [2024-11-22 08:51:35.895839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.923 [2024-11-22 08:51:35.895854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:00.923 [2024-11-22 08:51:35.895866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.895880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:00.924 [2024-11-22 08:51:35.895892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:00.924 [2024-11-22 08:51:35.895906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:00.924 [2024-11-22 08:51:35.895917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.895931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:00.924 [2024-11-22 08:51:35.895943] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:00.924 [2024-11-22 08:51:35.895957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.895968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:00.924 [2024-11-22 08:51:35.896178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:00.924 [2024-11-22 08:51:35.896231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.896275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:00.924 [2024-11-22 08:51:35.896311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:00.924 [2024-11-22 08:51:35.896410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.896452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:00.924 [2024-11-22 08:51:35.896490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:00.924 [2024-11-22 08:51:35.896607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:00.924 [2024-11-22 08:51:35.896653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:00.924 [2024-11-22 08:51:35.896689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:00.924 [2024-11-22 08:51:35.896770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:00.924 [2024-11-22 08:51:35.896843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:00.924 [2024-11-22 08:51:35.896886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:00.924 [2024-11-22 08:51:35.896978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:00.924 [2024-11-22 08:51:35.897027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:00.924 [2024-11-22 08:51:35.897064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:00.924 [2024-11-22 08:51:35.897129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:00.924 [2024-11-22 08:51:35.897201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:00.924 [2024-11-22 08:51:35.897248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:00.924 [2024-11-22 08:51:35.897324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.897368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:00.924 [2024-11-22 08:51:35.897436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:00.924 [2024-11-22 08:51:35.897456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.897469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:00.924 [2024-11-22 08:51:35.897485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:00.924 [2024-11-22 08:51:35.897496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.897511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:00.924 [2024-11-22 08:51:35.897523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:00.924 [2024-11-22 08:51:35.897537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.897548] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:00.924 [2024-11-22 08:51:35.897564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:00.924 [2024-11-22 08:51:35.897577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:00.924 [2024-11-22 08:51:35.897595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:00.924 [2024-11-22 08:51:35.897608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:00.924 [2024-11-22 08:51:35.897626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:00.924 [2024-11-22 08:51:35.897637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:00.924 [2024-11-22 08:51:35.897652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:00.924 [2024-11-22 08:51:35.897663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:00.924 [2024-11-22 08:51:35.897678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:00.924 [2024-11-22 08:51:35.897696] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:00.924 [2024-11-22 08:51:35.897715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:00.924 [2024-11-22 08:51:35.897748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:00.924 [2024-11-22 08:51:35.897789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:00.924 [2024-11-22 08:51:35.897804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:00.924 [2024-11-22 08:51:35.897816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:00.924 [2024-11-22 08:51:35.897832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:00.924 [2024-11-22 08:51:35.897933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:00.924 [2024-11-22 08:51:35.897950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:00.924 [2024-11-22 08:51:35.897996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:00.924 [2024-11-22 08:51:35.898009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:00.924 [2024-11-22 08:51:35.898024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:00.924 [2024-11-22 08:51:35.898039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.924 [2024-11-22 08:51:35.898055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:00.924 [2024-11-22 08:51:35.898068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.303 ms 00:31:00.924 [2024-11-22 08:51:35.898084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.924 [2024-11-22 08:51:35.898138] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
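For reference, the startup trace above is the result of a three-call RPC sequence; a minimal sketch, using exactly the names and arguments recorded earlier in this log (the -d UUID is the base bdev passed at ftl/common.sh@119). The sizes are self-consistent: 5242880 blocks x 4096 B = 20480 MiB of base capacity, and the 5120 MiB split of the cache controller matches the reported NV cache capacity.

  # Sketch of the bdev stack built above: attach the PCIe controller used as the
  # write-buffer cache, carve a 5120 MiB split from it, then create the FTL bdev
  # on the base device with cachen1p0 as its NV cache.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand for this sketch only
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create cachen1 -s 5120 1
  $rpc -t 60 bdev_ftl_create -b ftl -d 3072c401-309e-4a98-a57b-33bd1ea58d8e -c cachen1p0 --l2p_dram_limit 2

The scrub pass announced above wipes those NV cache chunks before first use, which is why it dominates the startup time reported below.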
00:31:00.924 [2024-11-22 08:51:35.898159] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:05.120 [2024-11-22 08:51:39.346912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.347239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:05.121 [2024-11-22 08:51:39.347269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3454.371 ms 00:31:05.121 [2024-11-22 08:51:39.347286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.385829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.385886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:05.121 [2024-11-22 08:51:39.385903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.296 ms 00:31:05.121 [2024-11-22 08:51:39.385918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.386036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.386055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:05.121 [2024-11-22 08:51:39.386089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:05.121 [2024-11-22 08:51:39.386108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.430555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.430604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:05.121 [2024-11-22 08:51:39.430620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.450 ms 00:31:05.121 [2024-11-22 08:51:39.430635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.430670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.430690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:05.121 [2024-11-22 08:51:39.430703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:05.121 [2024-11-22 08:51:39.430724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.431258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.431280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:05.121 [2024-11-22 08:51:39.431294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.458 ms 00:31:05.121 [2024-11-22 08:51:39.431309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.431364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.431380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:05.121 [2024-11-22 08:51:39.431396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:05.121 [2024-11-22 08:51:39.431414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.451893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.451944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:05.121 [2024-11-22 08:51:39.451990] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.488 ms 00:31:05.121 [2024-11-22 08:51:39.452006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.463016] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:05.121 [2024-11-22 08:51:39.464255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.464290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:05.121 [2024-11-22 08:51:39.464310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.160 ms 00:31:05.121 [2024-11-22 08:51:39.464323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.502470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.502512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:05.121 [2024-11-22 08:51:39.502532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.171 ms 00:31:05.121 [2024-11-22 08:51:39.502543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.502635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.502652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:05.121 [2024-11-22 08:51:39.502670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:05.121 [2024-11-22 08:51:39.502682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.536986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.537157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:05.121 [2024-11-22 08:51:39.537204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.272 ms 00:31:05.121 [2024-11-22 08:51:39.537217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.570664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.570704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:05.121 [2024-11-22 08:51:39.570744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.445 ms 00:31:05.121 [2024-11-22 08:51:39.570757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.571527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.571561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:05.121 [2024-11-22 08:51:39.571579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.725 ms 00:31:05.121 [2024-11-22 08:51:39.571591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.669326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.669509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:05.121 [2024-11-22 08:51:39.669561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 97.820 ms 00:31:05.121 [2024-11-22 08:51:39.669574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.705224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
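Every management step in this startup is emitted as the same four-record pattern from mngt/ftl_mngt.c (Action, name, duration, status), so the log effectively profiles itself. A quick sketch for ranking the slow steps, assuming the log has been saved to a file (ftl.log is a hypothetical name; GNU grep and sort are assumed):

  # Extract every per-step duration and list the largest first.
  grep -o 'duration: [0-9.]* ms' ftl.log | sort -k2 -rn | head

Here that immediately surfaces the 3454.371 ms "Scrub NV cache" step above as the bulk of the ~3.9 s total startup reported just below.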
00:31:05.121 [2024-11-22 08:51:39.705269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:05.121 [2024-11-22 08:51:39.705299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.610 ms 00:31:05.121 [2024-11-22 08:51:39.705310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.740274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.740315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:05.121 [2024-11-22 08:51:39.740334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.969 ms 00:31:05.121 [2024-11-22 08:51:39.740347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.773936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.773983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:05.121 [2024-11-22 08:51:39.774001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.593 ms 00:31:05.121 [2024-11-22 08:51:39.774029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.774081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.774095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:05.121 [2024-11-22 08:51:39.774113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:05.121 [2024-11-22 08:51:39.774125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.774232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.121 [2024-11-22 08:51:39.774246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:05.121 [2024-11-22 08:51:39.774266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:05.121 [2024-11-22 08:51:39.774278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.121 [2024-11-22 08:51:39.775342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3902.809 ms, result 0 00:31:05.121 { 00:31:05.121 "name": "ftl", 00:31:05.121 "uuid": "0035fff1-cfb7-4a84-9ef2-6d820e8e15d2" 00:31:05.121 } 00:31:05.121 08:51:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:05.121 [2024-11-22 08:51:39.994210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.121 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:05.461 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:05.461 [2024-11-22 08:51:40.397938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:05.461 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:05.721 [2024-11-22 08:51:40.599578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:05.721 08:51:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:05.980 Fill FTL, iteration 1 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83480 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83480 /var/tmp/spdk.tgt.sock 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83480 ']' 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:05.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.980 08:51:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:05.980 [2024-11-22 08:51:41.048137] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
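The four nvmf_* RPCs above (ftl/common.sh@121-124) export the FTL bdev over NVMe/TCP on loopback so a separate SPDK process can drive I/O against it; a sketch, verbatim from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand for this sketch only
  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

The fill parameters set at upgrade_shutdown.sh@28-34 are likewise self-consistent: bs=1048576 x count=1024 = 1073741824 B, i.e. each of the iterations=2 passes writes exactly the declared 1 GiB "size" at queue depth 2.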
00:31:05.980 [2024-11-22 08:51:41.048250] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83480 ] 00:31:06.240 [2024-11-22 08:51:41.227123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.500 [2024-11-22 08:51:41.340009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.438 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.438 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:07.438 08:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:07.439 ftln1 00:31:07.439 08:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:07.439 08:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:07.697 08:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83480 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83480 ']' 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83480 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83480 00:31:07.698 killing process with pid 83480 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83480' 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83480 00:31:07.698 08:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83480 00:31:10.236 08:51:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:10.236 08:51:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:10.236 [2024-11-22 08:51:45.069550] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
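The spdk_tgt instance launched above (pid 83480, pinned to core 1) is a throwaway initiator: it attaches to the loopback target, which yields the bdev ftln1, and its bdev subsystem config is dumped (via the echoed '{"subsystems": [' wrapper around save_subsystem_config -n bdev) into the ini.json that later spdk_dd runs load with --json. The attach call, verbatim from ftl/common.sh@167:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
      bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

Once the config file exists the initiator is killed (killprocess 83480), and each subsequent spdk_dd invocation recreates the TCP attachment from that JSON on its own.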
00:31:10.236 [2024-11-22 08:51:45.069679] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83537 ] 00:31:10.236 [2024-11-22 08:51:45.248144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.496 [2024-11-22 08:51:45.360416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.875  [2024-11-22T08:51:47.900Z] Copying: 244/1024 [MB] (244 MBps) [2024-11-22T08:51:48.838Z] Copying: 490/1024 [MB] (246 MBps) [2024-11-22T08:51:50.213Z] Copying: 739/1024 [MB] (249 MBps) [2024-11-22T08:51:50.213Z] Copying: 976/1024 [MB] (237 MBps) [2024-11-22T08:51:51.152Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:31:16.065 00:31:16.065 Calculate MD5 checksum, iteration 1 00:31:16.065 08:51:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:16.065 08:51:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:16.065 08:51:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:16.065 08:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:16.065 08:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:16.065 08:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:16.065 08:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:16.065 08:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:16.325 [2024-11-22 08:51:51.200449] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
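Fill and readback are the same spdk_dd binary with mirrored flags: --if=/dev/urandom --ob=ftln1 pushes random data into the FTL bdev, while --ib=ftln1 --of=<file> pulls it back into a plain file for checksumming. Both invocations, verbatim from ftl/common.sh@199:

  # Fill pass (host -> FTL bdev), then readback pass (FTL bdev -> file).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0

The throughput asymmetry visible here (~243 MBps filling vs ~700 MBps reading back) likely reflects the FTL write path plus /dev/urandom generation on the fill side.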
00:31:16.325 [2024-11-22 08:51:51.200783] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83601 ] 00:31:16.325 [2024-11-22 08:51:51.381672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.584 [2024-11-22 08:51:51.492486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.962  [2024-11-22T08:51:53.618Z] Copying: 698/1024 [MB] (698 MBps) [2024-11-22T08:51:54.555Z] Copying: 1024/1024 [MB] (average 701 MBps) 00:31:19.468 00:31:19.468 08:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:19.468 08:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=00cc30aa75037d4f79b4ed2b6f781693 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:20.900 Fill FTL, iteration 2 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:20.900 08:51:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:21.159 [2024-11-22 08:51:56.021633] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
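The seek/skip bookkeeping above is a two-pass fill-and-verify loop. A hedged reconstruction of the control flow in upgrade_shutdown.sh, using only the variable names visible in the trace (tcp_dd is the wrapper that expands to the spdk_dd commands shown; the file path is the one md5sum'd above):

  # Reconstruction, not the verbatim script.
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  for (( i = 0; i < iterations; i++ )); do
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))                         # 0 -> 1024 -> 2048, in 1 MiB blocks
    tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))
    sums[i]=$(md5sum $file | cut -f1 -d' ')        # sums[0]=00cc30aa..., sums[1] below
  done

Recording the checksums is what gives the test something to re-verify after the prep_upgrade_on_shutdown cycle later in the run.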
00:31:21.159 [2024-11-22 08:51:56.021906] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83657 ] 00:31:21.159 [2024-11-22 08:51:56.204258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.417 [2024-11-22 08:51:56.320490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.796  [2024-11-22T08:51:58.820Z] Copying: 250/1024 [MB] (250 MBps) [2024-11-22T08:52:00.197Z] Copying: 490/1024 [MB] (240 MBps) [2024-11-22T08:52:00.765Z] Copying: 731/1024 [MB] (241 MBps) [2024-11-22T08:52:01.023Z] Copying: 973/1024 [MB] (242 MBps) [2024-11-22T08:52:02.427Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:31:27.340 00:31:27.340 Calculate MD5 checksum, iteration 2 00:31:27.340 08:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:27.340 08:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:27.340 08:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:27.340 08:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:27.340 08:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:27.340 08:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:27.340 08:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:27.340 08:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:27.340 [2024-11-22 08:52:02.192514] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
00:31:27.340 [2024-11-22 08:52:02.192793] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83721 ] 00:31:27.340 [2024-11-22 08:52:02.371975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.600 [2024-11-22 08:52:02.481612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.505  [2024-11-22T08:52:04.592Z] Copying: 706/1024 [MB] (706 MBps) [2024-11-22T08:52:06.499Z] Copying: 1024/1024 [MB] (average 712 MBps) 00:31:31.412 00:31:31.412 08:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:31.412 08:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:33.317 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:33.317 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=52b9c32c7fb81d40df46cf26e9c005cd 00:31:33.317 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:33.317 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:33.317 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:33.317 [2024-11-22 08:52:08.224051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.317 [2024-11-22 08:52:08.224100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:33.317 [2024-11-22 08:52:08.224116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:33.317 [2024-11-22 08:52:08.224143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.317 [2024-11-22 08:52:08.224181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.317 [2024-11-22 08:52:08.224192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:33.317 [2024-11-22 08:52:08.224202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:33.317 [2024-11-22 08:52:08.224216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.317 [2024-11-22 08:52:08.224235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.317 [2024-11-22 08:52:08.224246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:33.317 [2024-11-22 08:52:08.224256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:33.317 [2024-11-22 08:52:08.224265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.317 [2024-11-22 08:52:08.224324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.280 ms, result 0 00:31:33.317 true 00:31:33.317 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:33.577 { 00:31:33.577 "name": "ftl", 00:31:33.577 "properties": [ 00:31:33.577 { 00:31:33.577 "name": "superblock_version", 00:31:33.577 "value": 5, 00:31:33.577 "read-only": true 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "name": "base_device", 00:31:33.577 "bands": [ 00:31:33.577 { 00:31:33.577 "id": 0, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 
00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 1, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 2, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 3, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 4, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 5, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 6, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 7, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 8, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 9, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 10, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 11, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 12, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 13, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 14, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 15, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 16, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 17, 00:31:33.577 "state": "FREE", 00:31:33.577 "validity": 0.0 00:31:33.577 } 00:31:33.577 ], 00:31:33.577 "read-only": true 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "name": "cache_device", 00:31:33.577 "type": "bdev", 00:31:33.577 "chunks": [ 00:31:33.577 { 00:31:33.577 "id": 0, 00:31:33.577 "state": "INACTIVE", 00:31:33.577 "utilization": 0.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 1, 00:31:33.577 "state": "CLOSED", 00:31:33.577 "utilization": 1.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 2, 00:31:33.577 "state": "CLOSED", 00:31:33.577 "utilization": 1.0 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 3, 00:31:33.577 "state": "OPEN", 00:31:33.577 "utilization": 0.001953125 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "id": 4, 00:31:33.577 "state": "OPEN", 00:31:33.577 "utilization": 0.0 00:31:33.577 } 00:31:33.577 ], 00:31:33.577 "read-only": true 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "name": "verbose_mode", 00:31:33.577 "value": true, 00:31:33.577 "unit": "", 00:31:33.577 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:33.577 }, 00:31:33.577 { 00:31:33.577 "name": "prep_upgrade_on_shutdown", 00:31:33.577 "value": false, 00:31:33.577 "unit": "", 00:31:33.577 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:33.578 } 00:31:33.578 ] 00:31:33.578 } 00:31:33.578 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:33.578 [2024-11-22 08:52:08.639730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
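The property flip in flight above is the point of the whole test: with prep_upgrade_on_shutdown set, the next shutdown persists everything a newer FTL version needs (as the property's own description in the dump puts it). The round-trip and the occupancy check that follows, sketched verbatim from upgrade_shutdown.sh@56 and @63:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand for this sketch only
  $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  $rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

The jq filter counts NV-cache chunks with non-zero utilization; it returns 3 here (chunks 1 and 2 CLOSED at 1.0, chunk 3 OPEN at 0.001953125), and the @64 guard [[ 3 -eq 0 ]] confirms the cache actually holds data before the shutdown proceeds.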
00:31:33.578 [2024-11-22 08:52:08.639790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:33.578 [2024-11-22 08:52:08.639805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:33.578 [2024-11-22 08:52:08.639831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.578 [2024-11-22 08:52:08.639856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.578 [2024-11-22 08:52:08.639867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:33.578 [2024-11-22 08:52:08.639876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:33.578 [2024-11-22 08:52:08.639886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.578 [2024-11-22 08:52:08.639906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.578 [2024-11-22 08:52:08.639917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:33.578 [2024-11-22 08:52:08.639927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:33.578 [2024-11-22 08:52:08.639937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.578 [2024-11-22 08:52:08.640015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0 00:31:33.578 true 00:31:33.837 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:33.837 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:33.837 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:33.837 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:33.838 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:33.838 08:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:34.098 [2024-11-22 08:52:09.047435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.098 [2024-11-22 08:52:09.047480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:34.098 [2024-11-22 08:52:09.047495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:34.098 [2024-11-22 08:52:09.047520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.098 [2024-11-22 08:52:09.047545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.098 [2024-11-22 08:52:09.047556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:34.098 [2024-11-22 08:52:09.047566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:34.098 [2024-11-22 08:52:09.047575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.098 [2024-11-22 08:52:09.047595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.098 [2024-11-22 08:52:09.047605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:34.098 [2024-11-22 08:52:09.047615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:34.098 [2024-11-22 08:52:09.047625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:34.098 [2024-11-22 08:52:09.047683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.235 ms, result 0 00:31:34.098 true 00:31:34.098 08:52:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:34.357 { 00:31:34.357 "name": "ftl", 00:31:34.357 "properties": [ 00:31:34.357 { 00:31:34.357 "name": "superblock_version", 00:31:34.357 "value": 5, 00:31:34.357 "read-only": true 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "name": "base_device", 00:31:34.357 "bands": [ 00:31:34.357 { 00:31:34.357 "id": 0, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 1, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 2, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 3, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 4, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 5, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 6, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 7, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 8, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 9, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 10, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 11, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 12, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 13, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 14, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 15, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 16, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 17, 00:31:34.357 "state": "FREE", 00:31:34.357 "validity": 0.0 00:31:34.357 } 00:31:34.357 ], 00:31:34.357 "read-only": true 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "name": "cache_device", 00:31:34.357 "type": "bdev", 00:31:34.357 "chunks": [ 00:31:34.357 { 00:31:34.357 "id": 0, 00:31:34.357 "state": "INACTIVE", 00:31:34.357 "utilization": 0.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 1, 00:31:34.357 "state": "CLOSED", 00:31:34.357 "utilization": 1.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 2, 00:31:34.357 "state": "CLOSED", 00:31:34.357 "utilization": 1.0 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 3, 00:31:34.357 "state": "OPEN", 00:31:34.357 "utilization": 0.001953125 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "id": 4, 00:31:34.357 "state": "OPEN", 00:31:34.357 "utilization": 0.0 00:31:34.357 } 00:31:34.357 ], 00:31:34.357 "read-only": true 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "name": "verbose_mode", 
00:31:34.357 "value": true, 00:31:34.357 "unit": "", 00:31:34.357 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:34.357 }, 00:31:34.357 { 00:31:34.357 "name": "prep_upgrade_on_shutdown", 00:31:34.357 "value": true, 00:31:34.357 "unit": "", 00:31:34.357 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:34.357 } 00:31:34.357 ] 00:31:34.357 } 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83358 ]] 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83358 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83358 ']' 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83358 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83358 00:31:34.357 killing process with pid 83358 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83358' 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83358 00:31:34.357 08:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83358 00:31:35.295 [2024-11-22 08:52:10.361663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:35.555 [2024-11-22 08:52:10.381389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-11-22 08:52:10.381429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:35.555 [2024-11-22 08:52:10.381444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:35.555 [2024-11-22 08:52:10.381455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.555 [2024-11-22 08:52:10.381478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:35.555 [2024-11-22 08:52:10.385730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.555 [2024-11-22 08:52:10.385759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:35.555 [2024-11-22 08:52:10.385770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.244 ms 00:31:35.555 [2024-11-22 08:52:10.385780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.399787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.399840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:43.675 [2024-11-22 08:52:17.399857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7025.366 ms 00:31:43.675 [2024-11-22 08:52:17.399867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.400928] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.400964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:43.675 [2024-11-22 08:52:17.400977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.041 ms 00:31:43.675 [2024-11-22 08:52:17.400987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.401901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.401922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:43.675 [2024-11-22 08:52:17.401934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.874 ms 00:31:43.675 [2024-11-22 08:52:17.401945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.416280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.416420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:43.675 [2024-11-22 08:52:17.416457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.294 ms 00:31:43.675 [2024-11-22 08:52:17.416468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.425397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.425434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:43.675 [2024-11-22 08:52:17.425446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.885 ms 00:31:43.675 [2024-11-22 08:52:17.425456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.425546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.425558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:43.675 [2024-11-22 08:52:17.425569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:31:43.675 [2024-11-22 08:52:17.425584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.439437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.439587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:43.675 [2024-11-22 08:52:17.439607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.860 ms 00:31:43.675 [2024-11-22 08:52:17.439617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.453519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.453648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:43.675 [2024-11-22 08:52:17.453683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.891 ms 00:31:43.675 [2024-11-22 08:52:17.453693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.467584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.467728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:43.675 [2024-11-22 08:52:17.467747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.852 ms 00:31:43.675 [2024-11-22 08:52:17.467757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.481401] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.481537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:43.675 [2024-11-22 08:52:17.481571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.590 ms 00:31:43.675 [2024-11-22 08:52:17.481581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.481638] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:43.675 [2024-11-22 08:52:17.481654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:43.675 [2024-11-22 08:52:17.481668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:43.675 [2024-11-22 08:52:17.481691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:43.675 [2024-11-22 08:52:17.481702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:43.675 [2024-11-22 08:52:17.481856] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:43.675 [2024-11-22 08:52:17.481866] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0035fff1-cfb7-4a84-9ef2-6d820e8e15d2 00:31:43.675 [2024-11-22 08:52:17.481876] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:43.675 [2024-11-22 08:52:17.481886] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:43.675 [2024-11-22 08:52:17.481895] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:43.675 [2024-11-22 08:52:17.481905] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:43.675 [2024-11-22 08:52:17.481915] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:43.675 [2024-11-22 08:52:17.481925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:43.675 [2024-11-22 08:52:17.481935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:43.675 [2024-11-22 08:52:17.481944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:43.675 [2024-11-22 08:52:17.481971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:43.675 [2024-11-22 08:52:17.481983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.482013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:43.675 [2024-11-22 08:52:17.482028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.346 ms 00:31:43.675 [2024-11-22 08:52:17.482038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.500689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.500813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:43.675 [2024-11-22 08:52:17.500848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.650 ms 00:31:43.675 [2024-11-22 08:52:17.500858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.501378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.675 [2024-11-22 08:52:17.501391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:43.675 [2024-11-22 08:52:17.501401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.491 ms 00:31:43.675 [2024-11-22 08:52:17.501411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.563437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.675 [2024-11-22 08:52:17.563585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:43.675 [2024-11-22 08:52:17.563606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.675 [2024-11-22 08:52:17.563617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.563654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.675 [2024-11-22 08:52:17.563664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:43.675 [2024-11-22 08:52:17.563674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.675 [2024-11-22 08:52:17.563684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.675 [2024-11-22 08:52:17.563757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.675 [2024-11-22 08:52:17.563769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:43.675 [2024-11-22 08:52:17.563780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.563790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.563811] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.563822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:43.676 [2024-11-22 08:52:17.563832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.563841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.678968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.679018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:43.676 [2024-11-22 08:52:17.679031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.679057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.773547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.773597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:43.676 [2024-11-22 08:52:17.773610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.773620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.773714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.773726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:43.676 [2024-11-22 08:52:17.773736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.773746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.773790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.773806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:43.676 [2024-11-22 08:52:17.773816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.773826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.773925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.773937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:43.676 [2024-11-22 08:52:17.773947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.773980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.774034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.774046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:43.676 [2024-11-22 08:52:17.774061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.774090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.774127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.774153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:43.676 [2024-11-22 08:52:17.774164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.774174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 
[2024-11-22 08:52:17.774215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.676 [2024-11-22 08:52:17.774226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:43.676 [2024-11-22 08:52:17.774241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.676 [2024-11-22 08:52:17.774251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.676 [2024-11-22 08:52:17.774366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7404.954 ms, result 0 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83920 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83920 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83920 ']' 00:31:46.211 08:52:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.212 08:52:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.212 08:52:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.212 08:52:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.212 08:52:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:46.212 [2024-11-22 08:52:20.878409] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
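Note on the statistics dump above: the WAF figure is simply total writes divided by user writes. A quick standalone check of the arithmetic for the numbers printed here (a one-liner for illustration, not part of the harness):

    awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'    # -> WAF = 1.5006

The extra 262464 writes on top of the 524288 user writes are the FTL's own metadata and relocation traffic. With the L2P persisted and the clean state set, the 'FTL shutdown' management process finishes with result 0, and the test then brings up a fresh spdk_tgt (pid 83920) from the saved tgt.json, whose startup trace follows.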
00:31:46.212 [2024-11-22 08:52:20.878555] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83920 ] 00:31:46.212 [2024-11-22 08:52:21.061828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.212 [2024-11-22 08:52:21.159664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.149 [2024-11-22 08:52:22.061538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:47.149 [2024-11-22 08:52:22.061600] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:47.149 [2024-11-22 08:52:22.209631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.149 [2024-11-22 08:52:22.209676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:47.149 [2024-11-22 08:52:22.209692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:47.149 [2024-11-22 08:52:22.209703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.150 [2024-11-22 08:52:22.209757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.150 [2024-11-22 08:52:22.209769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:47.150 [2024-11-22 08:52:22.209779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:47.150 [2024-11-22 08:52:22.209789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.150 [2024-11-22 08:52:22.209818] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:47.150 [2024-11-22 08:52:22.210807] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:47.150 [2024-11-22 08:52:22.210836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.150 [2024-11-22 08:52:22.210847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:47.150 [2024-11-22 08:52:22.210858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.029 ms 00:31:47.150 [2024-11-22 08:52:22.210868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.150 [2024-11-22 08:52:22.212345] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:47.410 [2024-11-22 08:52:22.231026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.410 [2024-11-22 08:52:22.231066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:47.410 [2024-11-22 08:52:22.231087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.711 ms 00:31:47.410 [2024-11-22 08:52:22.231097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.410 [2024-11-22 08:52:22.231158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.410 [2024-11-22 08:52:22.231171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:47.410 [2024-11-22 08:52:22.231182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:47.410 [2024-11-22 08:52:22.231192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.410 [2024-11-22 08:52:22.238274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.410 [2024-11-22 
08:52:22.238312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:47.410 [2024-11-22 08:52:22.238325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.015 ms 00:31:47.410 [2024-11-22 08:52:22.238351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.410 [2024-11-22 08:52:22.238415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.410 [2024-11-22 08:52:22.238429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:47.410 [2024-11-22 08:52:22.238440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:31:47.410 [2024-11-22 08:52:22.238450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.410 [2024-11-22 08:52:22.238494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.410 [2024-11-22 08:52:22.238506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:47.410 [2024-11-22 08:52:22.238520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:47.410 [2024-11-22 08:52:22.238530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.410 [2024-11-22 08:52:22.238556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:47.410 [2024-11-22 08:52:22.243304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.410 [2024-11-22 08:52:22.243340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:47.410 [2024-11-22 08:52:22.243352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.762 ms 00:31:47.410 [2024-11-22 08:52:22.243367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.410 [2024-11-22 08:52:22.243395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.410 [2024-11-22 08:52:22.243406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:47.410 [2024-11-22 08:52:22.243418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:47.410 [2024-11-22 08:52:22.243428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.410 [2024-11-22 08:52:22.243488] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:47.410 [2024-11-22 08:52:22.243512] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:47.411 [2024-11-22 08:52:22.243550] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:47.411 [2024-11-22 08:52:22.243567] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:47.411 [2024-11-22 08:52:22.243655] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:47.411 [2024-11-22 08:52:22.243669] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:47.411 [2024-11-22 08:52:22.243682] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:47.411 [2024-11-22 08:52:22.243700] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:47.411 [2024-11-22 08:52:22.243712] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:47.411 [2024-11-22 08:52:22.243727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:47.411 [2024-11-22 08:52:22.243737] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:47.411 [2024-11-22 08:52:22.243747] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:47.411 [2024-11-22 08:52:22.243757] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:47.411 [2024-11-22 08:52:22.243768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.411 [2024-11-22 08:52:22.243778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:47.411 [2024-11-22 08:52:22.243799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:31:47.411 [2024-11-22 08:52:22.243809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.411 [2024-11-22 08:52:22.243881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.411 [2024-11-22 08:52:22.243892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:47.411 [2024-11-22 08:52:22.243901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:47.411 [2024-11-22 08:52:22.243915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.411 [2024-11-22 08:52:22.244036] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:47.411 [2024-11-22 08:52:22.244051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:47.411 [2024-11-22 08:52:22.244061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:47.411 [2024-11-22 08:52:22.244072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:47.411 [2024-11-22 08:52:22.244091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:47.411 [2024-11-22 08:52:22.244110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:47.411 [2024-11-22 08:52:22.244121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:47.411 [2024-11-22 08:52:22.244131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:47.411 [2024-11-22 08:52:22.244178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:47.411 [2024-11-22 08:52:22.244187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:47.411 [2024-11-22 08:52:22.244206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:47.411 [2024-11-22 08:52:22.244216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:47.411 [2024-11-22 08:52:22.244235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:47.411 [2024-11-22 08:52:22.244244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244254] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:47.411 [2024-11-22 08:52:22.244263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:47.411 [2024-11-22 08:52:22.244273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:47.411 [2024-11-22 08:52:22.244283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:47.411 [2024-11-22 08:52:22.244292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:47.411 [2024-11-22 08:52:22.244301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:47.411 [2024-11-22 08:52:22.244321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:47.411 [2024-11-22 08:52:22.244330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:47.411 [2024-11-22 08:52:22.244340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:47.411 [2024-11-22 08:52:22.244349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:47.411 [2024-11-22 08:52:22.244358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:47.411 [2024-11-22 08:52:22.244367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:47.411 [2024-11-22 08:52:22.244377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:47.411 [2024-11-22 08:52:22.244386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:47.411 [2024-11-22 08:52:22.244395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:47.411 [2024-11-22 08:52:22.244414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:47.411 [2024-11-22 08:52:22.244422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:47.411 [2024-11-22 08:52:22.244440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:47.411 [2024-11-22 08:52:22.244468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:47.411 [2024-11-22 08:52:22.244476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244489] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:47.411 [2024-11-22 08:52:22.244507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:47.411 [2024-11-22 08:52:22.244518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:47.411 [2024-11-22 08:52:22.244528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:47.411 [2024-11-22 08:52:22.244542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:47.411 [2024-11-22 08:52:22.244552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:47.411 [2024-11-22 08:52:22.244561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:47.411 [2024-11-22 08:52:22.244570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:47.411 [2024-11-22 08:52:22.244579] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:47.411 [2024-11-22 08:52:22.244589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:47.411 [2024-11-22 08:52:22.244599] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:47.411 [2024-11-22 08:52:22.244611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:47.411 [2024-11-22 08:52:22.244623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:47.411 [2024-11-22 08:52:22.244633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:47.411 [2024-11-22 08:52:22.244644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:47.411 [2024-11-22 08:52:22.244654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:47.411 [2024-11-22 08:52:22.244665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:47.411 [2024-11-22 08:52:22.244675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:47.411 [2024-11-22 08:52:22.244685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:47.412 [2024-11-22 08:52:22.244696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:47.412 [2024-11-22 08:52:22.244707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:47.412 [2024-11-22 08:52:22.244717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:47.412 [2024-11-22 08:52:22.244728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:47.412 [2024-11-22 08:52:22.244737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:47.412 [2024-11-22 08:52:22.244747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:47.412 [2024-11-22 08:52:22.244758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:47.412 [2024-11-22 08:52:22.244768] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:47.412 [2024-11-22 08:52:22.244779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:47.412 [2024-11-22 08:52:22.244790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:47.412 [2024-11-22 08:52:22.244800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:47.412 [2024-11-22 08:52:22.244810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:47.412 [2024-11-22 08:52:22.244821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:47.412 [2024-11-22 08:52:22.244834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.412 [2024-11-22 08:52:22.244844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:47.412 [2024-11-22 08:52:22.244854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.885 ms 00:31:47.412 [2024-11-22 08:52:22.244864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.412 [2024-11-22 08:52:22.244909] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:47.412 [2024-11-22 08:52:22.244922] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:50.703 [2024-11-22 08:52:25.732871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.703 [2024-11-22 08:52:25.732932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:50.703 [2024-11-22 08:52:25.732949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3493.623 ms 00:31:50.703 [2024-11-22 08:52:25.732976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.703 [2024-11-22 08:52:25.770234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.703 [2024-11-22 08:52:25.770487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:50.703 [2024-11-22 08:52:25.770513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.997 ms 00:31:50.703 [2024-11-22 08:52:25.770525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.703 [2024-11-22 08:52:25.770616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.703 [2024-11-22 08:52:25.770635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:50.703 [2024-11-22 08:52:25.770647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:50.703 [2024-11-22 08:52:25.770657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.963 [2024-11-22 08:52:25.817434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.963 [2024-11-22 08:52:25.817595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:50.963 [2024-11-22 08:52:25.817620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.807 ms 00:31:50.964 [2024-11-22 08:52:25.817636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.817679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.817690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:50.964 [2024-11-22 08:52:25.817702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:50.964 [2024-11-22 08:52:25.817712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.818231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.818247] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:50.964 [2024-11-22 08:52:25.818260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.446 ms 00:31:50.964 [2024-11-22 08:52:25.818270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.818320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.818331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:50.964 [2024-11-22 08:52:25.818342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:50.964 [2024-11-22 08:52:25.818353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.838364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.838402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:50.964 [2024-11-22 08:52:25.838417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.021 ms 00:31:50.964 [2024-11-22 08:52:25.838427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.857900] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:50.964 [2024-11-22 08:52:25.857938] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:50.964 [2024-11-22 08:52:25.857982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.857993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:50.964 [2024-11-22 08:52:25.858004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.442 ms 00:31:50.964 [2024-11-22 08:52:25.858015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.877338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.877376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:50.964 [2024-11-22 08:52:25.877389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.310 ms 00:31:50.964 [2024-11-22 08:52:25.877415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.895210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.895246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:50.964 [2024-11-22 08:52:25.895258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.779 ms 00:31:50.964 [2024-11-22 08:52:25.895267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.912930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.912976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:50.964 [2024-11-22 08:52:25.913004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.651 ms 00:31:50.964 [2024-11-22 08:52:25.913014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:25.913800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:25.913829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:50.964 [2024-11-22 
08:52:25.913841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.673 ms 00:31:50.964 [2024-11-22 08:52:25.913851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:26.013756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:26.013816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:50.964 [2024-11-22 08:52:26.013831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.044 ms 00:31:50.964 [2024-11-22 08:52:26.013858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:26.024580] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:50.964 [2024-11-22 08:52:26.025485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:26.025514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:50.964 [2024-11-22 08:52:26.025528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.594 ms 00:31:50.964 [2024-11-22 08:52:26.025538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:26.025623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:26.025639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:50.964 [2024-11-22 08:52:26.025651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:50.964 [2024-11-22 08:52:26.025662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:26.025723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:26.025735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:50.964 [2024-11-22 08:52:26.025746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:50.964 [2024-11-22 08:52:26.025756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:26.025779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:26.025789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:50.964 [2024-11-22 08:52:26.025800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:50.964 [2024-11-22 08:52:26.025813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.964 [2024-11-22 08:52:26.025850] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:50.964 [2024-11-22 08:52:26.025863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.964 [2024-11-22 08:52:26.025873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:50.964 [2024-11-22 08:52:26.025884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:50.964 [2024-11-22 08:52:26.025894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.224 [2024-11-22 08:52:26.060556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.224 [2024-11-22 08:52:26.060599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:51.224 [2024-11-22 08:52:26.060613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.697 ms 00:31:51.224 [2024-11-22 08:52:26.060623] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.224 [2024-11-22 08:52:26.060698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.224 [2024-11-22 08:52:26.060711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:51.224 [2024-11-22 08:52:26.060721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:51.224 [2024-11-22 08:52:26.060731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.224 [2024-11-22 08:52:26.061837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3858.025 ms, result 0 00:31:51.224 [2024-11-22 08:52:26.076868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.224 [2024-11-22 08:52:26.092858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:51.224 [2024-11-22 08:52:26.101473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:51.224 08:52:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:51.224 08:52:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:51.224 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:51.224 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:51.224 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:51.484 [2024-11-22 08:52:26.333131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.484 [2024-11-22 08:52:26.333174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:51.484 [2024-11-22 08:52:26.333189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:51.484 [2024-11-22 08:52:26.333202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.484 [2024-11-22 08:52:26.333226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.484 [2024-11-22 08:52:26.333236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:51.484 [2024-11-22 08:52:26.333246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:51.484 [2024-11-22 08:52:26.333256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.484 [2024-11-22 08:52:26.333275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.484 [2024-11-22 08:52:26.333286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:51.484 [2024-11-22 08:52:26.333296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:51.484 [2024-11-22 08:52:26.333305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.484 [2024-11-22 08:52:26.333361] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.219 ms, result 0 00:31:51.484 true 00:31:51.484 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:51.484 { 00:31:51.484 "name": "ftl", 00:31:51.484 "properties": [ 00:31:51.484 { 00:31:51.484 "name": "superblock_version", 00:31:51.484 "value": 5, 00:31:51.484 "read-only": true 00:31:51.484 }, 
00:31:51.484 { 00:31:51.484 "name": "base_device", 00:31:51.484 "bands": [ 00:31:51.484 { 00:31:51.484 "id": 0, 00:31:51.484 "state": "CLOSED", 00:31:51.484 "validity": 1.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 1, 00:31:51.484 "state": "CLOSED", 00:31:51.484 "validity": 1.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 2, 00:31:51.484 "state": "CLOSED", 00:31:51.484 "validity": 0.007843137254901933 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 3, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 4, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 5, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 6, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 7, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 8, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 9, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 10, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 11, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 12, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 13, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 14, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 15, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 16, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 17, 00:31:51.484 "state": "FREE", 00:31:51.484 "validity": 0.0 00:31:51.484 } 00:31:51.484 ], 00:31:51.484 "read-only": true 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "name": "cache_device", 00:31:51.484 "type": "bdev", 00:31:51.484 "chunks": [ 00:31:51.484 { 00:31:51.484 "id": 0, 00:31:51.484 "state": "INACTIVE", 00:31:51.484 "utilization": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 1, 00:31:51.484 "state": "OPEN", 00:31:51.484 "utilization": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 2, 00:31:51.484 "state": "OPEN", 00:31:51.484 "utilization": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 3, 00:31:51.484 "state": "FREE", 00:31:51.484 "utilization": 0.0 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "id": 4, 00:31:51.484 "state": "FREE", 00:31:51.484 "utilization": 0.0 00:31:51.484 } 00:31:51.484 ], 00:31:51.484 "read-only": true 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "name": "verbose_mode", 00:31:51.484 "value": true, 00:31:51.484 "unit": "", 00:31:51.484 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:51.484 }, 00:31:51.484 { 00:31:51.484 "name": "prep_upgrade_on_shutdown", 00:31:51.484 "value": false, 00:31:51.484 "unit": "", 00:31:51.484 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:51.484 } 00:31:51.484 ] 00:31:51.484 } 00:31:51.484 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:51.484 08:52:26 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:51.484 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:51.743 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:51.744 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:51.744 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:51.744 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:51.744 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:52.003 Validate MD5 checksum, iteration 1 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:52.003 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:52.004 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:52.004 08:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:52.004 [2024-11-22 08:52:27.035305] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
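Note on used=0 above: it comes straight from the jq filter in the trace, which counts cache_device chunks with non-zero utilization in the bdev_ftl_get_properties output. A standalone repro against a trimmed copy of the chunk list printed earlier (assuming only that jq is on PATH):

    props='{"properties":[{"name":"cache_device","chunks":[
      {"id":0,"state":"INACTIVE","utilization":0.0},
      {"id":1,"state":"OPEN","utilization":0.0},
      {"id":2,"state":"OPEN","utilization":0.0},
      {"id":3,"state":"FREE","utilization":0.0},
      {"id":4,"state":"FREE","utilization":0.0}]}]}'
    echo "$props" | jq '[.properties[] | select(.name == "cache_device")
                         | .chunks[] | select(.utilization != 0.0)] | length'
    # -> 0, so the '[[ 0 -ne 0 ]]' guard falls through

Worth noting: the parallel opened-bands filter selects .name == "bands", while the dump names that property "base_device"; with no OPENED bands in the dump the result is 0 either way.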
00:31:52.004 [2024-11-22 08:52:27.035600] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84001 ] 00:31:52.263 [2024-11-22 08:52:27.213861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.263 [2024-11-22 08:52:27.325466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.225  [2024-11-22T08:52:29.571Z] Copying: 724/1024 [MB] (724 MBps) [2024-11-22T08:52:30.952Z] Copying: 1024/1024 [MB] (average 720 MBps) 00:31:55.865 00:31:55.865 08:52:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:55.865 08:52:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:57.773 Validate MD5 checksum, iteration 2 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=00cc30aa75037d4f79b4ed2b6f781693 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 00cc30aa75037d4f79b4ed2b6f781693 != \0\0\c\c\3\0\a\a\7\5\0\3\7\d\4\f\7\9\b\4\e\d\2\b\6\f\7\8\1\6\9\3 ]] 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:57.773 08:52:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:57.773 [2024-11-22 08:52:32.632848] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 
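Both checksum iterations follow the same pattern: tcp_dd reads 1024 blocks of 1 MiB from ftln1 at an increasing --skip offset (0 for the first GiB, 1024 for the second, leaving skip=2048 afterwards), md5sum hashes the file, and the fresh sum is checked against the expected value (identical here, so the test proceeds). The backslash run in the '[[ ... != \0\0\c\c... ]]' line is only bash xtrace at work: inside [[ ]], an unquoted right-hand side of != is a glob pattern, and xtrace escapes every character of the expanded pattern. A minimal repro, standalone and not from the suite:

    set -x
    sum=00cc30aa75037d4f79b4ed2b6f781693
    if [[ $sum != $sum ]]; then echo 'MD5 mismatch'; fi
    # trace prints: + [[ 00cc30aa... != \0\0\c\c\3\0\a\a... ]]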
00:31:57.773 [2024-11-22 08:52:32.633160] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84065 ] 00:31:57.773 [2024-11-22 08:52:32.808503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.033 [2024-11-22 08:52:32.916682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.942  [2024-11-22T08:52:35.029Z] Copying: 718/1024 [MB] (718 MBps) [2024-11-22T08:52:38.319Z] Copying: 1024/1024 [MB] (average 716 MBps) 00:32:03.232 00:32:03.232 08:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:03.232 08:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=52b9c32c7fb81d40df46cf26e9c005cd 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 52b9c32c7fb81d40df46cf26e9c005cd != \5\2\b\9\c\3\2\c\7\f\b\8\1\d\4\0\d\f\4\6\c\f\2\6\e\9\c\0\0\5\c\d ]] 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83920 ]] 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83920 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84138 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84138 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84138 ']' 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
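Note the contrast with the shutdown at the top of this run: tcp_target_shutdown went through killprocess, giving the target time to run the full 'FTL shutdown' sequence (persist L2P, persist metadata, set clean state), whereas tcp_target_shutdown_dirty sends SIGKILL, so none of the 'Persist ...' steps run and the superblock keeps the dirty state that was set during startup. A paraphrase of the two helpers as reconstructed from the xtrace, not the verbatim ftl/common.sh source:

    tcp_target_shutdown() {        # common.sh@130-132 in the trace
        [[ -n $spdk_tgt_pid ]] && killprocess $spdk_tgt_pid
        unset spdk_tgt_pid
    }
    tcp_target_shutdown_dirty() {  # common.sh@137-139 in the trace
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

The relaunched target (pid 84138) therefore has to take the recovery path, which is what the startup trace below exercises.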
00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.611 08:52:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:04.611 [2024-11-22 08:52:39.567364] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:32:04.611 [2024-11-22 08:52:39.568237] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84138 ] 00:32:04.611 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83920 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:04.868 [2024-11-22 08:52:39.748176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.868 [2024-11-22 08:52:39.853416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.801 [2024-11-22 08:52:40.789510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:05.801 [2024-11-22 08:52:40.790070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:06.061 [2024-11-22 08:52:40.935482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.935526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:06.061 [2024-11-22 08:52:40.935542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:06.061 [2024-11-22 08:52:40.935552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.935603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.935615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:06.061 [2024-11-22 08:52:40.935625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:32:06.061 [2024-11-22 08:52:40.935635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.935662] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:06.061 [2024-11-22 08:52:40.936633] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:06.061 [2024-11-22 08:52:40.936662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.936673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:06.061 [2024-11-22 08:52:40.936683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.011 ms 00:32:06.061 [2024-11-22 08:52:40.936693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.937042] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:06.061 [2024-11-22 08:52:40.959978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.960016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:06.061 [2024-11-22 08:52:40.960030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.973 ms 00:32:06.061 [2024-11-22 08:52:40.960040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.973666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:06.061 [2024-11-22 08:52:40.973701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:06.061 [2024-11-22 08:52:40.973716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:32:06.061 [2024-11-22 08:52:40.973727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.974208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.974223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:06.061 [2024-11-22 08:52:40.974234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:32:06.061 [2024-11-22 08:52:40.974244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.974298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.974314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:06.061 [2024-11-22 08:52:40.974324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:32:06.061 [2024-11-22 08:52:40.974334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.974359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.974369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:06.061 [2024-11-22 08:52:40.974378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:06.061 [2024-11-22 08:52:40.974388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.974409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:06.061 [2024-11-22 08:52:40.978294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.978324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:06.061 [2024-11-22 08:52:40.978336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.896 ms 00:32:06.061 [2024-11-22 08:52:40.978361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.978394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.978405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:06.061 [2024-11-22 08:52:40.978415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:06.061 [2024-11-22 08:52:40.978424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.978459] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:06.061 [2024-11-22 08:52:40.978481] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:06.061 [2024-11-22 08:52:40.978513] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:06.061 [2024-11-22 08:52:40.978533] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:06.061 [2024-11-22 08:52:40.978618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:06.061 [2024-11-22 08:52:40.978631] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:06.061 [2024-11-22 08:52:40.978643] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:06.061 [2024-11-22 08:52:40.978656] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:06.061 [2024-11-22 08:52:40.978667] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:06.061 [2024-11-22 08:52:40.978677] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:06.061 [2024-11-22 08:52:40.978687] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:06.061 [2024-11-22 08:52:40.978696] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:06.061 [2024-11-22 08:52:40.978705] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:06.061 [2024-11-22 08:52:40.978715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.978738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:06.061 [2024-11-22 08:52:40.978747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:32:06.061 [2024-11-22 08:52:40.978757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.978827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.061 [2024-11-22 08:52:40.978837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:06.061 [2024-11-22 08:52:40.978846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:32:06.061 [2024-11-22 08:52:40.978856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.061 [2024-11-22 08:52:40.978940] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:06.061 [2024-11-22 08:52:40.978952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:06.061 [2024-11-22 08:52:40.978984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:06.061 [2024-11-22 08:52:40.978994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.061 [2024-11-22 08:52:40.979004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:06.061 [2024-11-22 08:52:40.979014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:06.061 [2024-11-22 08:52:40.979023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:06.061 [2024-11-22 08:52:40.979032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:06.061 [2024-11-22 08:52:40.979043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:06.061 [2024-11-22 08:52:40.979055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.061 [2024-11-22 08:52:40.979064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:06.061 [2024-11-22 08:52:40.979073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:06.061 [2024-11-22 08:52:40.979082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.061 [2024-11-22 08:52:40.979091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:06.061 [2024-11-22 08:52:40.979100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:06.061 [2024-11-22 08:52:40.979109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.061 [2024-11-22 08:52:40.979118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:06.061 [2024-11-22 08:52:40.979143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:06.061 [2024-11-22 08:52:40.979162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.061 [2024-11-22 08:52:40.979171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:06.061 [2024-11-22 08:52:40.979180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:06.061 [2024-11-22 08:52:40.979189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:06.061 [2024-11-22 08:52:40.979198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:06.061 [2024-11-22 08:52:40.979217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:06.061 [2024-11-22 08:52:40.979226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:06.061 [2024-11-22 08:52:40.979235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:06.061 [2024-11-22 08:52:40.979245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:06.061 [2024-11-22 08:52:40.979254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:06.061 [2024-11-22 08:52:40.979263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:06.061 [2024-11-22 08:52:40.979272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:06.061 [2024-11-22 08:52:40.979280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:06.061 [2024-11-22 08:52:40.979289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:06.061 [2024-11-22 08:52:40.979298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:06.061 [2024-11-22 08:52:40.979307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.061 [2024-11-22 08:52:40.979315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:06.061 [2024-11-22 08:52:40.979324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:06.061 [2024-11-22 08:52:40.979332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.061 [2024-11-22 08:52:40.979341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:06.062 [2024-11-22 08:52:40.979349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:06.062 [2024-11-22 08:52:40.979358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.062 [2024-11-22 08:52:40.979366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:06.062 [2024-11-22 08:52:40.979378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:06.062 [2024-11-22 08:52:40.979387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:06.062 [2024-11-22 08:52:40.979395] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:06.062 [2024-11-22 08:52:40.979405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:06.062 [2024-11-22 08:52:40.979414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:06.062 [2024-11-22 08:52:40.979423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:06.062 [2024-11-22 08:52:40.979433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:06.062 [2024-11-22 08:52:40.979442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:06.062 [2024-11-22 08:52:40.979451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:06.062 [2024-11-22 08:52:40.979460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:06.062 [2024-11-22 08:52:40.979469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:06.062 [2024-11-22 08:52:40.979493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:06.062 [2024-11-22 08:52:40.979504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:06.062 [2024-11-22 08:52:40.979516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:06.062 [2024-11-22 08:52:40.979537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:06.062 [2024-11-22 08:52:40.979567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:06.062 [2024-11-22 08:52:40.979578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:06.062 [2024-11-22 08:52:40.979588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:06.062 [2024-11-22 08:52:40.979598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:06.062 [2024-11-22 08:52:40.979670] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:06.062 [2024-11-22 08:52:40.979681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:06.062 [2024-11-22 08:52:40.979702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:06.062 [2024-11-22 08:52:40.979714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:06.062 [2024-11-22 08:52:40.979725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:06.062 [2024-11-22 08:52:40.979735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:40.979749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:06.062 [2024-11-22 08:52:40.979760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.849 ms 00:32:06.062 [2024-11-22 08:52:40.979770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.014974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.015009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:06.062 [2024-11-22 08:52:41.015022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.211 ms 00:32:06.062 [2024-11-22 08:52:41.015032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.015068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.015078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:06.062 [2024-11-22 08:52:41.015088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:06.062 [2024-11-22 08:52:41.015098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.060501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.060537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:06.062 [2024-11-22 08:52:41.060549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.425 ms 00:32:06.062 [2024-11-22 08:52:41.060559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.060587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.060597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:06.062 [2024-11-22 08:52:41.060607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:06.062 [2024-11-22 08:52:41.060616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.060740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.060753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:06.062 [2024-11-22 08:52:41.060763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:32:06.062 [2024-11-22 08:52:41.060772] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.060809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.060820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:06.062 [2024-11-22 08:52:41.060829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:06.062 [2024-11-22 08:52:41.060839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.081111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.081144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:06.062 [2024-11-22 08:52:41.081156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.281 ms 00:32:06.062 [2024-11-22 08:52:41.081166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.081278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.081293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:06.062 [2024-11-22 08:52:41.081303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:06.062 [2024-11-22 08:52:41.081312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.062 [2024-11-22 08:52:41.131313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.062 [2024-11-22 08:52:41.131353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:06.062 [2024-11-22 08:52:41.131368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.062 ms 00:32:06.062 [2024-11-22 08:52:41.131380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.322 [2024-11-22 08:52:41.145505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.322 [2024-11-22 08:52:41.145538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:06.322 [2024-11-22 08:52:41.145556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.629 ms 00:32:06.322 [2024-11-22 08:52:41.145565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.322 [2024-11-22 08:52:41.225961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.322 [2024-11-22 08:52:41.226015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:06.322 [2024-11-22 08:52:41.226036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 80.463 ms 00:32:06.322 [2024-11-22 08:52:41.226046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.322 [2024-11-22 08:52:41.226212] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:06.322 [2024-11-22 08:52:41.226320] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:06.322 [2024-11-22 08:52:41.226415] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:06.322 [2024-11-22 08:52:41.226508] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:06.322 [2024-11-22 08:52:41.226520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.322 [2024-11-22 08:52:41.226530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:06.322 [2024-11-22 
08:52:41.226541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.429 ms 00:32:06.322 [2024-11-22 08:52:41.226550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.322 [2024-11-22 08:52:41.226614] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:06.322 [2024-11-22 08:52:41.226627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.322 [2024-11-22 08:52:41.226641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:06.322 [2024-11-22 08:52:41.226651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:06.322 [2024-11-22 08:52:41.226661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.322 [2024-11-22 08:52:41.247519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.322 [2024-11-22 08:52:41.247680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:06.322 [2024-11-22 08:52:41.247717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.870 ms 00:32:06.322 [2024-11-22 08:52:41.247728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.322 [2024-11-22 08:52:41.260463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.322 [2024-11-22 08:52:41.260498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:06.322 [2024-11-22 08:52:41.260510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:06.322 [2024-11-22 08:52:41.260519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.322 [2024-11-22 08:52:41.260602] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:06.322 [2024-11-22 08:52:41.260782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.322 [2024-11-22 08:52:41.260796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:06.322 [2024-11-22 08:52:41.260806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:32:06.322 [2024-11-22 08:52:41.260816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.890 [2024-11-22 08:52:41.829538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.890 [2024-11-22 08:52:41.829608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:06.890 [2024-11-22 08:52:41.829626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 568.544 ms 00:32:06.890 [2024-11-22 08:52:41.829637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.890 [2024-11-22 08:52:41.835199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.890 [2024-11-22 08:52:41.835239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:06.890 [2024-11-22 08:52:41.835253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.190 ms 00:32:06.890 [2024-11-22 08:52:41.835264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.890 [2024-11-22 08:52:41.835657] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:06.890 [2024-11-22 08:52:41.835681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.890 [2024-11-22 08:52:41.835692] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:06.890 [2024-11-22 08:52:41.835704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:32:06.890 [2024-11-22 08:52:41.835714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.890 [2024-11-22 08:52:41.835743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.890 [2024-11-22 08:52:41.835755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:06.890 [2024-11-22 08:52:41.835765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:06.890 [2024-11-22 08:52:41.835775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.890 [2024-11-22 08:52:41.835816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 576.147 ms, result 0 00:32:06.890 [2024-11-22 08:52:41.835854] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:06.890 [2024-11-22 08:52:41.835931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.890 [2024-11-22 08:52:41.835941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:06.890 [2024-11-22 08:52:41.835950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:32:06.890 [2024-11-22 08:52:41.835971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.458 [2024-11-22 08:52:42.414593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.458 [2024-11-22 08:52:42.414650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:07.458 [2024-11-22 08:52:42.414666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 578.452 ms 00:32:07.458 [2024-11-22 08:52:42.414676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.458 [2024-11-22 08:52:42.420429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.458 [2024-11-22 08:52:42.420468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:07.458 [2024-11-22 08:52:42.420481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.248 ms 00:32:07.458 [2024-11-22 08:52:42.420491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.458 [2024-11-22 08:52:42.420880] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:07.458 [2024-11-22 08:52:42.420902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.458 [2024-11-22 08:52:42.420912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:07.458 [2024-11-22 08:52:42.420923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.382 ms 00:32:07.458 [2024-11-22 08:52:42.420933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.458 [2024-11-22 08:52:42.420975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.458 [2024-11-22 08:52:42.420988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:07.459 [2024-11-22 08:52:42.420998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:07.459 [2024-11-22 08:52:42.421008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 
08:52:42.421046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 586.136 ms, result 0 00:32:07.459 [2024-11-22 08:52:42.421084] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:07.459 [2024-11-22 08:52:42.421097] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:07.459 [2024-11-22 08:52:42.421109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.421120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:07.459 [2024-11-22 08:52:42.421130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1162.410 ms 00:32:07.459 [2024-11-22 08:52:42.421140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.421169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.421181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:07.459 [2024-11-22 08:52:42.421196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:07.459 [2024-11-22 08:52:42.421206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.432175] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:07.459 [2024-11-22 08:52:42.432429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.432474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:07.459 [2024-11-22 08:52:42.432559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.224 ms 00:32:07.459 [2024-11-22 08:52:42.432595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.433227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.433350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:07.459 [2024-11-22 08:52:42.433440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:32:07.459 [2024-11-22 08:52:42.433476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.435517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.435660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:07.459 [2024-11-22 08:52:42.435741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.997 ms 00:32:07.459 [2024-11-22 08:52:42.435777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.435844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.436042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:07.459 [2024-11-22 08:52:42.436083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:07.459 [2024-11-22 08:52:42.436121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.436249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.436291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:07.459 
[2024-11-22 08:52:42.436375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:07.459 [2024-11-22 08:52:42.436406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.436448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.436480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:07.459 [2024-11-22 08:52:42.436510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:07.459 [2024-11-22 08:52:42.436542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.436671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:07.459 [2024-11-22 08:52:42.436718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.436747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:07.459 [2024-11-22 08:52:42.436778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:32:07.459 [2024-11-22 08:52:42.436808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.436898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.459 [2024-11-22 08:52:42.436933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:07.459 [2024-11-22 08:52:42.437030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:32:07.459 [2024-11-22 08:52:42.437094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.459 [2024-11-22 08:52:42.438036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1504.559 ms, result 0 00:32:07.459 [2024-11-22 08:52:42.452875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.459 [2024-11-22 08:52:42.468848] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:07.459 [2024-11-22 08:52:42.478169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:07.459 Validate MD5 checksum, iteration 1 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:07.459 08:52:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:07.459 08:52:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:07.718 [2024-11-22 08:52:42.617320] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization... 00:32:07.718 [2024-11-22 08:52:42.617621] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84178 ] 00:32:07.718 [2024-11-22 08:52:42.797861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.976 [2024-11-22 08:52:42.908722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.879  [2024-11-22T08:52:45.225Z] Copying: 726/1024 [MB] (726 MBps) [2024-11-22T08:52:46.677Z] Copying: 1024/1024 [MB] (average 720 MBps) 00:32:11.590 00:32:11.590 08:52:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:11.590 08:52:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:13.494 Validate MD5 checksum, iteration 2 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=00cc30aa75037d4f79b4ed2b6f781693 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 00cc30aa75037d4f79b4ed2b6f781693 != \0\0\c\c\3\0\a\a\7\5\0\3\7\d\4\f\7\9\b\4\e\d\2\b\6\f\7\8\1\6\9\3 ]] 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:13.494 08:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:13.494 [2024-11-22 08:52:48.201837] Starting SPDK v25.01-pre git sha1 
a6ed92877 / DPDK 24.03.0 initialization... 00:32:13.494 [2024-11-22 08:52:48.202117] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84234 ] 00:32:13.494 [2024-11-22 08:52:48.383599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.494 [2024-11-22 08:52:48.494008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.397  [2024-11-22T08:52:50.742Z] Copying: 724/1024 [MB] (724 MBps) [2024-11-22T08:52:52.120Z] Copying: 1024/1024 [MB] (average 703 MBps) 00:32:17.033 00:32:17.033 08:52:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:17.033 08:52:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:18.410 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:18.410 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=52b9c32c7fb81d40df46cf26e9c005cd 00:32:18.410 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 52b9c32c7fb81d40df46cf26e9c005cd != \5\2\b\9\c\3\2\c\7\f\b\8\1\d\4\0\d\f\4\6\c\f\2\6\e\9\c\0\0\5\c\d ]] 00:32:18.410 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:18.410 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:18.410 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:18.411 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:18.411 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:18.411 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84138 ]] 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84138 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84138 ']' 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84138 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84138 00:32:18.670 killing process with pid 84138 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84138' 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84138 00:32:18.670 08:52:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84138 00:32:20.049 [2024-11-22 08:52:54.733457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:20.049 [2024-11-22 08:52:54.753396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.049 [2024-11-22 08:52:54.753441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:20.049 [2024-11-22 08:52:54.753456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:20.049 [2024-11-22 08:52:54.753466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.049 [2024-11-22 08:52:54.753504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:20.049 [2024-11-22 08:52:54.757482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.049 [2024-11-22 08:52:54.757514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:20.049 [2024-11-22 08:52:54.757542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.968 ms 00:32:20.049 [2024-11-22 08:52:54.757556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.049 [2024-11-22 08:52:54.757749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.049 [2024-11-22 08:52:54.757761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:20.049 [2024-11-22 08:52:54.757772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:32:20.049 [2024-11-22 08:52:54.757782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.049 [2024-11-22 08:52:54.759202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.049 [2024-11-22 08:52:54.759241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:20.049 [2024-11-22 08:52:54.759254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.404 ms 00:32:20.049 [2024-11-22 08:52:54.759264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.049 [2024-11-22 08:52:54.760267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.049 [2024-11-22 08:52:54.760296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:20.049 [2024-11-22 08:52:54.760308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.890 ms 00:32:20.049 [2024-11-22 08:52:54.760318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.049 [2024-11-22 08:52:54.774968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.049 [2024-11-22 08:52:54.775004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:20.050 [2024-11-22 08:52:54.775018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.641 ms 00:32:20.050 [2024-11-22 08:52:54.775035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.050 [2024-11-22 08:52:54.782936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.050 [2024-11-22 08:52:54.782979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:20.050 [2024-11-22 08:52:54.782992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.877 ms 00:32:20.050 [2024-11-22 08:52:54.783003] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:20.050 [2024-11-22 08:52:54.783101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.050 [2024-11-22 08:52:54.783115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:20.050 [2024-11-22 08:52:54.783126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:32:20.050 [2024-11-22 08:52:54.783136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.050 [2024-11-22 08:52:54.797589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.050 [2024-11-22 08:52:54.797624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:20.050 [2024-11-22 08:52:54.797636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.453 ms 00:32:20.050 [2024-11-22 08:52:54.797645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.050 [2024-11-22 08:52:54.811672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.050 [2024-11-22 08:52:54.811708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:20.050 [2024-11-22 08:52:54.811721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.000 ms 00:32:20.050 [2024-11-22 08:52:54.811730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.050 [2024-11-22 08:52:54.825495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.050 [2024-11-22 08:52:54.825531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:20.050 [2024-11-22 08:52:54.825543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.737 ms 00:32:20.050 [2024-11-22 08:52:54.825552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.050 [2024-11-22 08:52:54.839174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.050 [2024-11-22 08:52:54.839207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:20.050 [2024-11-22 08:52:54.839235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.560 ms 00:32:20.050 [2024-11-22 08:52:54.839245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.050 [2024-11-22 08:52:54.839278] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:20.050 [2024-11-22 08:52:54.839295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:20.050 [2024-11-22 08:52:54.839307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:20.050 [2024-11-22 08:52:54.839318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:20.050 [2024-11-22 08:52:54.839329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 
[2024-11-22 08:52:54.839382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:20.050 [2024-11-22 08:52:54.839486] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:20.050 [2024-11-22 08:52:54.839496] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0035fff1-cfb7-4a84-9ef2-6d820e8e15d2 00:32:20.050 [2024-11-22 08:52:54.839507] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:20.050 [2024-11-22 08:52:54.839516] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:20.050 [2024-11-22 08:52:54.839526] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:20.050 [2024-11-22 08:52:54.839535] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:20.050 [2024-11-22 08:52:54.839545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:20.050 [2024-11-22 08:52:54.839554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:20.050 [2024-11-22 08:52:54.839564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:20.050 [2024-11-22 08:52:54.839573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:20.050 [2024-11-22 08:52:54.839587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:20.050 [2024-11-22 08:52:54.839601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.050 [2024-11-22 08:52:54.839617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:20.050 [2024-11-22 08:52:54.839628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.324 ms 00:32:20.050 [2024-11-22 08:52:54.839638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:20.050 [2024-11-22 08:52:54.858648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:20.050 [2024-11-22 08:52:54.858684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:20.050 [2024-11-22 08:52:54.858696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.011 ms 00:32:20.050 [2024-11-22 08:52:54.858706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
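[editor's note] This graceful teardown (Persist L2P ... Set FTL clean state ... Dump statistics) is what a plain SIGTERM buys, in contrast to the kill -9 earlier. The killprocess helper whose xtrace appears above (autotest_common.sh@954-978) reconstructs fairly directly from the log; this is a hedged sketch, with the sudo-wrapper branch elided:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # still alive?
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # The real helper special-cases a sudo wrapper and signals its
            # child instead; skipped here.
            [[ $process_name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"        # SIGTERM -> spdk_tgt runs the clean FTL shutdown
        wait "$pid" || :   # reap the child; ignore the signal's exit status
    }

The kill -0 probe and the ps comm= check correspond one-to-one to the traced commands, and the wait is what makes the subsequent 'FTL shutdown' management process (378.076 ms below) complete before the script proceeds.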
00:32:20.050 [2024-11-22 08:52:54.859279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:20.050 [2024-11-22 08:52:54.859297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:32:20.050 [2024-11-22 08:52:54.859308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms
00:32:20.050 [2024-11-22 08:52:54.859318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:54.919935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:54.919987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:32:20.050 [2024-11-22 08:52:54.920016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:54.920026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:54.920061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:54.920071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:32:20.050 [2024-11-22 08:52:54.920081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:54.920091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:54.920167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:54.920180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:32:20.050 [2024-11-22 08:52:54.920191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:54.920201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:54.920218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:54.920233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:32:20.050 [2024-11-22 08:52:54.920243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:54.920253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:55.035372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:55.035423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:32:20.050 [2024-11-22 08:52:55.035453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:55.035463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:55.130085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:55.130138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:32:20.050 [2024-11-22 08:52:55.130151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:55.130161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:55.130270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:55.130283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:32:20.050 [2024-11-22 08:52:55.130294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:55.130304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:55.130350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:55.130362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:32:20.050 [2024-11-22 08:52:55.130376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:55.130396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:55.130494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:55.130507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:32:20.050 [2024-11-22 08:52:55.130517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.050 [2024-11-22 08:52:55.130527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.050 [2024-11-22 08:52:55.130562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.050 [2024-11-22 08:52:55.130590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:32:20.309 [2024-11-22 08:52:55.130601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.309 [2024-11-22 08:52:55.130615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.309 [2024-11-22 08:52:55.130652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.309 [2024-11-22 08:52:55.130663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:32:20.309 [2024-11-22 08:52:55.130674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.309 [2024-11-22 08:52:55.130684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.309 [2024-11-22 08:52:55.130737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:20.309 [2024-11-22 08:52:55.130749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:32:20.309 [2024-11-22 08:52:55.130763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:20.309 [2024-11-22 08:52:55.130774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:20.309 [2024-11-22 08:52:55.130894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 378.076 ms, result 0
00:32:21.245 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:32:21.245 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:21.245 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:32:21.245 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:32:21.245 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:32:21.245 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:32:21.245 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:32:21.246 Remove shared memory files
00:32:21.246 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:21.246 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:32:21.246 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:32:21.505 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83920
00:32:21.505 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:21.505 08:52:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:32:21.505
00:32:21.505 real 1m24.290s
00:32:21.505 user 1m56.272s
00:32:21.505 sys 0m21.198s
00:32:21.505 08:52:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:21.505 08:52:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:21.505 ************************************
00:32:21.505 END TEST ftl_upgrade_shutdown
00:32:21.505 ************************************
00:32:21.505 08:52:56 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:32:21.505 08:52:56 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:32:21.505 08:52:56 ftl -- ftl/ftl.sh@14 -- # killprocess 76643
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 76643 ']'
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@958 -- # kill -0 76643
00:32:21.505 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76643) - No such process
00:32:21.505 Process with pid 76643 is not found
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76643 is not found'
00:32:21.505 08:52:56 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:32:21.505 08:52:56 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84356
00:32:21.505 08:52:56 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:21.505 08:52:56 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84356
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@835 -- # '[' -z 84356 ']'
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:21.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:21.505 08:52:56 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:21.505 [2024-11-22 08:52:56.508287] Starting SPDK v25.01-pre git sha1 a6ed92877 / DPDK 24.03.0 initialization...
00:32:21.505 [2024-11-22 08:52:56.508423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84356 ]
00:32:21.765 [2024-11-22 08:52:56.687157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:21.765 [2024-11-22 08:52:56.788253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:22.703 08:52:57 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:22.703 08:52:57 ftl -- common/autotest_common.sh@868 -- # return 0
00:32:22.703 08:52:57 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:32:22.963 nvme0n1
00:32:22.963 08:52:57 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:32:22.963 08:52:57 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:22.963 08:52:57 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:32:23.222 08:52:58 ftl -- ftl/common.sh@28 -- # stores=3fb52da3-f9dc-4319-a425-a67f7414c767
00:32:23.222 08:52:58 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:32:23.222 08:52:58 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fb52da3-f9dc-4319-a425-a67f7414c767
00:32:23.482 08:52:58 ftl -- ftl/ftl.sh@23 -- # killprocess 84356
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@954 -- # '[' -z 84356 ']'
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@958 -- # kill -0 84356
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@959 -- # uname
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84356
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:23.482 killing process with pid 84356
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84356'
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@973 -- # kill 84356
00:32:23.482 08:52:58 ftl -- common/autotest_common.sh@978 -- # wait 84356
00:32:26.018 08:53:00 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:32:26.018 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:26.018 Waiting for block devices as requested
00:32:26.277 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:32:26.277 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:32:26.277 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:32:26.535 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:32:31.805 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:32:31.805 08:53:06 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:32:31.805 Remove shared memory files
00:32:31.805 08:53:06 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:31.805 08:53:06 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:32:31.805 08:53:06 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:32:31.805 08:53:06 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:32:31.805 08:53:06 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:31.805 08:53:06 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:32:31.805
00:32:31.805 real 11m24.000s
00:32:31.805 user 13m44.105s
00:32:31.805 sys 1m28.771s
00:32:31.805 08:53:06 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:31.805 08:53:06 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:31.805 ************************************
00:32:31.805 END TEST ftl
00:32:31.805 ************************************
00:32:31.805 08:53:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:32:31.805 08:53:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:32:31.805 08:53:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:32:31.805 08:53:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:32:31.805 08:53:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:32:31.805 08:53:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:32:31.805 08:53:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:32:31.805 08:53:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:32:31.805 08:53:06 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:32:31.805 08:53:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:32:31.805 08:53:06 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:31.805 08:53:06 -- common/autotest_common.sh@10 -- # set +x
00:32:31.805 08:53:06 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:32:31.805 08:53:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:32:31.805 08:53:06 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:32:31.805 08:53:06 -- common/autotest_common.sh@10 -- # set +x
00:32:34.342 INFO: APP EXITING
00:32:34.342 INFO: killing all VMs
00:32:34.342 INFO: killing vhost app
00:32:34.342 INFO: EXIT DONE
00:32:34.342 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:34.909 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:32:34.909 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:32:34.909 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:32:34.909 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:32:35.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:35.737 Cleaning
00:32:35.737 Removing: /var/run/dpdk/spdk0/config
00:32:35.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:35.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:35.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:35.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:35.737 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:35.737 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:35.737 Removing: /var/run/dpdk/spdk0
00:32:35.737 Removing: /var/run/dpdk/spdk_pid57516
00:32:35.737 Removing: /var/run/dpdk/spdk_pid57756
00:32:35.737 Removing: /var/run/dpdk/spdk_pid57985
00:32:35.737 Removing: /var/run/dpdk/spdk_pid58095
00:32:35.737 Removing: /var/run/dpdk/spdk_pid58140
00:32:35.737 Removing: /var/run/dpdk/spdk_pid58279
00:32:35.737 Removing: /var/run/dpdk/spdk_pid58297
00:32:35.737 Removing: /var/run/dpdk/spdk_pid58507
00:32:35.737 Removing: /var/run/dpdk/spdk_pid58619
00:32:35.737 Removing: /var/run/dpdk/spdk_pid58726
00:32:35.737 Removing: /var/run/dpdk/spdk_pid58853
00:32:35.997 Removing: /var/run/dpdk/spdk_pid58961
00:32:35.997 Removing: /var/run/dpdk/spdk_pid59001
00:32:35.997 Removing: /var/run/dpdk/spdk_pid59043
00:32:35.997 Removing: /var/run/dpdk/spdk_pid59119
00:32:35.997 Removing: /var/run/dpdk/spdk_pid59236
00:32:35.997 Removing: /var/run/dpdk/spdk_pid59683
00:32:35.997 Removing: /var/run/dpdk/spdk_pid59758
00:32:35.997 Removing: /var/run/dpdk/spdk_pid59834
00:32:35.997 Removing: /var/run/dpdk/spdk_pid59851
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60009
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60025
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60180
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60201
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60271
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60289
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60353
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60376
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60577
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60608
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60697
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60891
00:32:35.997 Removing: /var/run/dpdk/spdk_pid60986
00:32:35.997 Removing: /var/run/dpdk/spdk_pid61028
00:32:35.997 Removing: /var/run/dpdk/spdk_pid61477
00:32:35.997 Removing: /var/run/dpdk/spdk_pid61586
00:32:35.997 Removing: /var/run/dpdk/spdk_pid61706
00:32:35.997 Removing: /var/run/dpdk/spdk_pid61759
00:32:35.997 Removing: /var/run/dpdk/spdk_pid61785
00:32:35.997 Removing: /var/run/dpdk/spdk_pid61869
00:32:35.997 Removing: /var/run/dpdk/spdk_pid62517
00:32:35.997 Removing: /var/run/dpdk/spdk_pid62559
00:32:35.997 Removing: /var/run/dpdk/spdk_pid63050
00:32:35.997 Removing: /var/run/dpdk/spdk_pid63152
00:32:35.997 Removing: /var/run/dpdk/spdk_pid63268
00:32:35.997 Removing: /var/run/dpdk/spdk_pid63321
00:32:35.997 Removing: /var/run/dpdk/spdk_pid63347
00:32:35.997 Removing: /var/run/dpdk/spdk_pid63372
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65279
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65434
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65438
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65450
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65498
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65502
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65514
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65559
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65563
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65575
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65625
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65629
00:32:35.997 Removing: /var/run/dpdk/spdk_pid65641
00:32:35.997 Removing: /var/run/dpdk/spdk_pid67061
00:32:35.997 Removing: /var/run/dpdk/spdk_pid67179
00:32:35.997 Removing: /var/run/dpdk/spdk_pid68609
00:32:35.997 Removing: /var/run/dpdk/spdk_pid70366
00:32:35.997 Removing: /var/run/dpdk/spdk_pid70445
00:32:35.997 Removing: /var/run/dpdk/spdk_pid70526
00:32:35.997 Removing: /var/run/dpdk/spdk_pid70636
00:32:36.256 Removing: /var/run/dpdk/spdk_pid70734
00:32:36.256 Removing: /var/run/dpdk/spdk_pid70830
00:32:36.256 Removing: /var/run/dpdk/spdk_pid70909
00:32:36.256 Removing: /var/run/dpdk/spdk_pid70990
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71100
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71198
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71295
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71380
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71455
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71566
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71658
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71759
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71839
00:32:36.256 Removing: /var/run/dpdk/spdk_pid71917
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72024
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72121
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72217
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72302
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72376
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72455
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72535
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72642
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72745
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72845
00:32:36.256 Removing: /var/run/dpdk/spdk_pid72932
00:32:36.256 Removing: /var/run/dpdk/spdk_pid73012
00:32:36.256 Removing: /var/run/dpdk/spdk_pid73092
00:32:36.256 Removing: /var/run/dpdk/spdk_pid73177
00:32:36.256 Removing: /var/run/dpdk/spdk_pid73286
00:32:36.256 Removing: /var/run/dpdk/spdk_pid73381
00:32:36.256 Removing: /var/run/dpdk/spdk_pid73532
00:32:36.256 Removing: /var/run/dpdk/spdk_pid73827
00:32:36.256 Removing: /var/run/dpdk/spdk_pid73869
00:32:36.256 Removing: /var/run/dpdk/spdk_pid74321
00:32:36.256 Removing: /var/run/dpdk/spdk_pid74505
00:32:36.256 Removing: /var/run/dpdk/spdk_pid74607
00:32:36.256 Removing: /var/run/dpdk/spdk_pid74723
00:32:36.256 Removing: /var/run/dpdk/spdk_pid74782
00:32:36.256 Removing: /var/run/dpdk/spdk_pid74806
00:32:36.256 Removing: /var/run/dpdk/spdk_pid75104
00:32:36.256 Removing: /var/run/dpdk/spdk_pid75170
00:32:36.256 Removing: /var/run/dpdk/spdk_pid75262
00:32:36.256 Removing: /var/run/dpdk/spdk_pid75691
00:32:36.256 Removing: /var/run/dpdk/spdk_pid75837
00:32:36.256 Removing: /var/run/dpdk/spdk_pid76643
00:32:36.256 Removing: /var/run/dpdk/spdk_pid76786
00:32:36.256 Removing: /var/run/dpdk/spdk_pid76979
00:32:36.256 Removing: /var/run/dpdk/spdk_pid77087
00:32:36.256 Removing: /var/run/dpdk/spdk_pid77390
00:32:36.256 Removing: /var/run/dpdk/spdk_pid77639
00:32:36.256 Removing: /var/run/dpdk/spdk_pid78005
00:32:36.256 Removing: /var/run/dpdk/spdk_pid78210
00:32:36.256 Removing: /var/run/dpdk/spdk_pid78351
00:32:36.256 Removing: /var/run/dpdk/spdk_pid78415
00:32:36.516 Removing: /var/run/dpdk/spdk_pid78560
00:32:36.516 Removing: /var/run/dpdk/spdk_pid78595
00:32:36.516 Removing: /var/run/dpdk/spdk_pid78663
00:32:36.516 Removing: /var/run/dpdk/spdk_pid78872
00:32:36.516 Removing: /var/run/dpdk/spdk_pid79108
00:32:36.516 Removing: /var/run/dpdk/spdk_pid79559
00:32:36.516 Removing: /var/run/dpdk/spdk_pid80023
00:32:36.516 Removing: /var/run/dpdk/spdk_pid80480
00:32:36.516 Removing: /var/run/dpdk/spdk_pid81011
00:32:36.516 Removing: /var/run/dpdk/spdk_pid81154
00:32:36.516 Removing: /var/run/dpdk/spdk_pid81251
00:32:36.516 Removing: /var/run/dpdk/spdk_pid81879
00:32:36.516 Removing: /var/run/dpdk/spdk_pid81952
00:32:36.516 Removing: /var/run/dpdk/spdk_pid82418
00:32:36.516 Removing: /var/run/dpdk/spdk_pid82815
00:32:36.516 Removing: /var/run/dpdk/spdk_pid83358
00:32:36.516 Removing: /var/run/dpdk/spdk_pid83480
00:32:36.516 Removing: /var/run/dpdk/spdk_pid83537
00:32:36.516 Removing: /var/run/dpdk/spdk_pid83601
00:32:36.516 Removing: /var/run/dpdk/spdk_pid83657
00:32:36.516 Removing: /var/run/dpdk/spdk_pid83721
00:32:36.516 Removing: /var/run/dpdk/spdk_pid83920
00:32:36.516 Removing: /var/run/dpdk/spdk_pid84001
00:32:36.516 Removing: /var/run/dpdk/spdk_pid84065
00:32:36.516 Removing: /var/run/dpdk/spdk_pid84138
00:32:36.516 Removing: /var/run/dpdk/spdk_pid84178
00:32:36.516 Removing: /var/run/dpdk/spdk_pid84234
00:32:36.516 Removing: /var/run/dpdk/spdk_pid84356
00:32:36.516 Clean
00:32:36.516 08:53:11 -- common/autotest_common.sh@1453 -- # return 0
00:32:36.516 08:53:11 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:36.516 08:53:11 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:36.516 08:53:11 -- common/autotest_common.sh@10 -- # set +x
00:32:36.775 08:53:11 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:36.775 08:53:11 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:36.775 08:53:11 -- common/autotest_common.sh@10 -- # set +x
00:32:36.775 08:53:11 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:36.775 08:53:11 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:36.775 08:53:11 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:36.775 08:53:11 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:36.775 08:53:11 -- spdk/autotest.sh@398 -- # hostname
00:32:36.775 08:53:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:37.033 geninfo: WARNING: invalid characters removed from testname!
00:33:03.633 08:53:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:03.633 08:53:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:05.543 08:53:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:07.451 08:53:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:09.357 08:53:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:11.301 08:53:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:13.836 08:53:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:13.836 08:53:48 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:13.836 08:53:48 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:33:13.836 08:53:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:13.836 08:53:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:13.836 08:53:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:13.845 + [[ -n 5259 ]]
00:33:13.845 + sudo kill 5259
00:33:13.853 [Pipeline] }
00:33:13.868 [Pipeline] // timeout
00:33:13.873 [Pipeline] }
00:33:13.883 [Pipeline] // stage
00:33:13.887 [Pipeline] }
00:33:13.896 [Pipeline] // catchError
00:33:13.903 [Pipeline] stage
00:33:13.905 [Pipeline] { (Stop VM)
00:33:13.914 [Pipeline] sh
00:33:14.182 + vagrant halt
00:33:17.523 ==> default: Halting domain...
00:33:24.109 [Pipeline] sh
00:33:24.387 + vagrant destroy -f
00:33:27.671 ==> default: Removing domain...
00:33:28.253 [Pipeline] sh
00:33:28.537 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:28.547 [Pipeline] }
00:33:28.561 [Pipeline] // stage
00:33:28.567 [Pipeline] }
00:33:28.580 [Pipeline] // dir
00:33:28.585 [Pipeline] }
00:33:28.599 [Pipeline] // wrap
00:33:28.606 [Pipeline] }
00:33:28.618 [Pipeline] // catchError
00:33:28.627 [Pipeline] stage
00:33:28.629 [Pipeline] { (Epilogue)
00:33:28.642 [Pipeline] sh
00:33:28.924 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:35.520 [Pipeline] catchError
00:33:35.522 [Pipeline] {
00:33:35.531 [Pipeline] sh
00:33:35.810 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:36.069 Artifacts sizes are good
00:33:36.090 [Pipeline] }
00:33:36.104 [Pipeline] // catchError
00:33:36.113 [Pipeline] archiveArtifacts
00:33:36.120 Archiving artifacts
00:33:36.219 [Pipeline] cleanWs
00:33:36.229 [WS-CLEANUP] Deleting project workspace...
00:33:36.229 [WS-CLEANUP] Deferred wipeout is used...
00:33:36.236 [WS-CLEANUP] done
00:33:36.251 [Pipeline] }
00:33:36.265 [Pipeline] // stage
00:33:36.271 [Pipeline] }
00:33:36.283 [Pipeline] // node
00:33:36.288 [Pipeline] End of Pipeline
00:33:36.316 Finished: SUCCESS